Test Report: QEMU_macOS 16968

3b33420a0c9ae0948b181bc91d502671e4007a23:2023-07-31:30376

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 26.72
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.07
22 TestAddons/Setup 42.5
23 TestCertOptions 10.24
24 TestCertExpiration 195.38
25 TestDockerFlags 10.12
26 TestForceSystemdFlag 10.63
27 TestForceSystemdEnv 10.04
72 TestFunctional/parallel/ServiceCmdConnect 31.45
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
139 TestImageBuild/serial/BuildWithBuildArg 1.16
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.82
183 TestMountStart/serial/StartWithMountFirst 10.21
186 TestMultiNode/serial/FreshStart2Nodes 9.74
187 TestMultiNode/serial/DeployApp2Nodes 93.65
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.16
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.38
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 19.73
202 TestPreload 9.86
204 TestScheduledStopUnix 9.98
205 TestSkaffold 14.13
208 TestRunningBinaryUpgrade 127.84
210 TestKubernetesUpgrade 15.32
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.43
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.14
225 TestStoppedBinaryUpgrade/Setup 166.03
227 TestPause/serial/Start 9.86
237 TestNoKubernetes/serial/StartWithK8s 9.88
238 TestNoKubernetes/serial/StartWithStopK8s 5.48
239 TestNoKubernetes/serial/Start 5.47
243 TestNoKubernetes/serial/StartNoArgs 5.47
245 TestNetworkPlugins/group/auto/Start 9.78
246 TestNetworkPlugins/group/kindnet/Start 9.73
247 TestNetworkPlugins/group/calico/Start 9.73
248 TestNetworkPlugins/group/custom-flannel/Start 9.86
249 TestNetworkPlugins/group/false/Start 9.65
250 TestNetworkPlugins/group/enable-default-cni/Start 9.74
251 TestNetworkPlugins/group/flannel/Start 9.73
252 TestNetworkPlugins/group/bridge/Start 9.87
253 TestStoppedBinaryUpgrade/Upgrade 1.35
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.08
255 TestNetworkPlugins/group/kubenet/Start 9.84
257 TestStartStop/group/old-k8s-version/serial/FirstStart 10.66
259 TestStartStop/group/no-preload/serial/FirstStart 9.93
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
264 TestStartStop/group/old-k8s-version/serial/SecondStart 6.96
265 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
266 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
267 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
268 TestStartStop/group/old-k8s-version/serial/Pause 0.1
270 TestStartStop/group/embed-certs/serial/FirstStart 11.31
271 TestStartStop/group/no-preload/serial/DeployApp 0.1
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
275 TestStartStop/group/no-preload/serial/SecondStart 7.02
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
279 TestStartStop/group/no-preload/serial/Pause 0.1
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.12
282 TestStartStop/group/embed-certs/serial/DeployApp 0.09
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/embed-certs/serial/SecondStart 7.02
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/embed-certs/serial/Pause 0.1
292 TestStartStop/group/newest-cni/serial/FirstStart 11.36
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.95
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.24
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
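
Of the failures detailed below, two signatures are visible in the excerpted logs: a 404 downloading the darwin/arm64 kubectl for v1.16.0 (TestDownloadOnly), and "Connection refused" on /var/run/socket_vmnet when launching qemu2 VMs (TestOffline; the many near-identical ~10 s Start failures above are consistent with the same signature). To reproduce a single failing test locally, one plausible invocation per minikube's contributor testing docs (exact flags may vary by version) is:

	env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline" make integration
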
TestDownloadOnly/v1.16.0/json-events (26.72s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (26.720650917s)

-- stdout --
	{"specversion":"1.0","id":"1e1436ca-4dff-4f3c-995a-c9fa8d16a092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-435000] minikube v1.31.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1a9ca96-7e12-43d8-85a6-f194dbfa089c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"e2acbea3-bb58-4496-bca2-ee8b86691c0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig"}}
	{"specversion":"1.0","id":"a74f6c9b-2118-4f7b-ab6c-65ae2b8fa94f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f04fe7a5-4380-43d9-bd76-098cc5491364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ef35f91a-f513-4572-9fc9-4d579d63d5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube"}}
	{"specversion":"1.0","id":"95616a4f-f03e-4ff2-9cb1-3946a8de27f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"848e23e6-de68-42e3-a258-71d49a8311c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"73e4fe08-a0cd-49ae-a82c-6931594ab3aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0a166699-ed3a-4468-b250-c428c4c70053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eec7f939-9328-44f0-a994-4da9072fe77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-435000 in cluster download-only-435000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3dbd5de-0a17-43e9-8b77-c7431ccd06c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e8d726f-540e-46a8-aeb6-2f362706a4e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690] Decompressors:map[bz2:0x140001e7340 gz:0x140001e7348 tar:0x140001e72a0 tar.bz2:0x140001e72d0 tar.gz:0x140001e72e0 tar.xz:0x140001e72f0 tar.zst:0x140001e7330 tbz2:0x140001e72d0 tgz:0x140001
e72e0 txz:0x140001e72f0 tzst:0x140001e7330 xz:0x140001e7350 zip:0x140001e7370 zst:0x140001e7358] Getters:map[file:0x14000f3c5b0 http:0x140007aa190 https:0x140007aa1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d87490d5-a361-4c81-85d1-f6533b2c4123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 03:53:36.833088    5225 out.go:296] Setting OutFile to fd 1 ...
	I0731 03:53:36.833201    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:53:36.833205    5225 out.go:309] Setting ErrFile to fd 2...
	I0731 03:53:36.833207    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:53:36.833341    5225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	W0731 03:53:36.833400    5225 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: no such file or directory
	I0731 03:53:36.834542    5225 out.go:303] Setting JSON to true
	I0731 03:53:36.852302    5225 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8587,"bootTime":1690792229,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 03:53:36.852385    5225 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 03:53:36.857693    5225 out.go:97] [download-only-435000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 03:53:36.860836    5225 out.go:169] MINIKUBE_LOCATION=16968
	I0731 03:53:36.857788    5225 notify.go:220] Checking for updates...
	W0731 03:53:36.857807    5225 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 03:53:36.867623    5225 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:53:36.870820    5225 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 03:53:36.873865    5225 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 03:53:36.876842    5225 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	W0731 03:53:36.882798    5225 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 03:53:36.883003    5225 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 03:53:36.885820    5225 out.go:97] Using the qemu2 driver based on user configuration
	I0731 03:53:36.885838    5225 start.go:298] selected driver: qemu2
	I0731 03:53:36.885841    5225 start.go:898] validating driver "qemu2" against <nil>
	I0731 03:53:36.885896    5225 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 03:53:36.888829    5225 out.go:169] Automatically selected the socket_vmnet network
	I0731 03:53:36.894029    5225 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 03:53:36.894118    5225 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 03:53:36.894178    5225 cni.go:84] Creating CNI manager for ""
	I0731 03:53:36.894195    5225 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 03:53:36.894204    5225 start_flags.go:319] config:
	{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:53:36.898740    5225 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 03:53:36.903158    5225 out.go:97] Downloading VM boot image ...
	I0731 03:53:36.903199    5225 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0731 03:53:50.268959    5225 out.go:97] Starting control plane node download-only-435000 in cluster download-only-435000
	I0731 03:53:50.268970    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:53:50.367320    5225 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 03:53:50.367370    5225 cache.go:57] Caching tarball of preloaded images
	I0731 03:53:50.368297    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:53:50.372599    5225 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0731 03:53:50.372608    5225 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:53:50.593729    5225 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 03:54:02.354713    5225 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:02.354887    5225 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:02.996549    5225 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0731 03:54:02.996744    5225 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/download-only-435000/config.json ...
	I0731 03:54:02.996764    5225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/download-only-435000/config.json: {Name:mk709c968bf792ce50f91e1c718b5910675af98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:02.997024    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:54:02.997189    5225 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0731 03:54:03.479156    5225 out.go:169] 
	W0731 03:54:03.483187    5225 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690] Decompressors:map[bz2:0x140001e7340 gz:0x140001e7348 tar:0x140001e72a0 tar.bz2:0x140001e72d0 tar.gz:0x140001e72e0 tar.xz:0x140001e72f0 tar.zst:0x140001e7330 tbz2:0x140001e72d0 tgz:0x140001e72e0 txz:0x140001e72f0 tzst:0x140001e7330 xz:0x140001e7350 zip:0x140001e7370 zst:0x140001e7358] Getters:map[file:0x14000f3c5b0 http:0x140007aa190 https:0x140007aa1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 03:54:03.483212    5225 out_reason.go:110] 
	W0731 03:54:03.492059    5225 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 03:54:03.496092    5225 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-435000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (26.72s)
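
The failure above is a 404 fetching the kubectl checksum: v1.16.0 (released in 2019) predates Apple silicon, so no darwin/arm64 kubectl binary or .sha1 checksum was ever published. A quick check against the URLs from the log (expected: 404 for both v1.16.0 URLs, and 200 for a release that does ship darwin/arm64 builds, e.g. the v1.27.3 used elsewhere in this run):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl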

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
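
This failure is downstream of TestDownloadOnly/v1.16.0/json-events above: the 404 meant kubectl was never written to the cache, so the test's stat check cannot succeed. Verifiable directly with the path from the log:

	stat /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl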

TestOffline (10.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-750000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-750000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.8962935s)

-- stdout --
	* [offline-docker-750000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-750000 in cluster offline-docker-750000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:10:50.656078    6989 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:10:50.656215    6989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:10:50.656218    6989 out.go:309] Setting ErrFile to fd 2...
	I0731 04:10:50.656221    6989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:10:50.656335    6989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:10:50.657525    6989 out.go:303] Setting JSON to false
	I0731 04:10:50.674063    6989 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9621,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:10:50.674156    6989 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:10:50.681687    6989 out.go:177] * [offline-docker-750000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:10:50.685881    6989 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:10:50.685987    6989 notify.go:220] Checking for updates...
	I0731 04:10:50.693689    6989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:10:50.697730    6989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:10:50.700635    6989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:10:50.703706    6989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:10:50.706705    6989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:10:50.709927    6989 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:10:50.709971    6989 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:10:50.713611    6989 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:10:50.719595    6989 start.go:298] selected driver: qemu2
	I0731 04:10:50.719601    6989 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:10:50.719613    6989 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:10:50.721609    6989 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:10:50.724639    6989 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:10:50.728732    6989 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:10:50.728752    6989 cni.go:84] Creating CNI manager for ""
	I0731 04:10:50.728763    6989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:10:50.728767    6989 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:10:50.728772    6989 start_flags.go:319] config:
	{Name:offline-docker-750000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:10:50.733038    6989 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:50.736617    6989 out.go:177] * Starting control plane node offline-docker-750000 in cluster offline-docker-750000
	I0731 04:10:50.740698    6989 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:10:50.740726    6989 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:10:50.740736    6989 cache.go:57] Caching tarball of preloaded images
	I0731 04:10:50.740804    6989 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:10:50.740809    6989 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:10:50.740872    6989 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/offline-docker-750000/config.json ...
	I0731 04:10:50.740884    6989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/offline-docker-750000/config.json: {Name:mkac975178397a6748aaeff2d9fa7c65f8396748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:10:50.741105    6989 start.go:365] acquiring machines lock for offline-docker-750000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:10:50.741146    6989 start.go:369] acquired machines lock for "offline-docker-750000" in 29.583µs
	I0731 04:10:50.741158    6989 start.go:93] Provisioning new machine with config: &{Name:offline-docker-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:10:50.741195    6989 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:10:50.748669    6989 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:10:50.762953    6989 start.go:159] libmachine.API.Create for "offline-docker-750000" (driver="qemu2")
	I0731 04:10:50.762987    6989 client.go:168] LocalClient.Create starting
	I0731 04:10:50.763069    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:10:50.763089    6989 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:50.763102    6989 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:50.763154    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:10:50.763168    6989 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:50.763176    6989 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:50.763499    6989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:10:50.884729    6989 main.go:141] libmachine: Creating SSH key...
	I0731 04:10:51.114957    6989 main.go:141] libmachine: Creating Disk image...
	I0731 04:10:51.114965    6989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:10:51.115103    6989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:51.123766    6989 main.go:141] libmachine: STDOUT: 
	I0731 04:10:51.123785    6989 main.go:141] libmachine: STDERR: 
	I0731 04:10:51.123886    6989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2 +20000M
	I0731 04:10:51.131984    6989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:10:51.132005    6989 main.go:141] libmachine: STDERR: 
	I0731 04:10:51.132031    6989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:51.132040    6989 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:10:51.132074    6989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6f:b1:fb:83:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:51.133891    6989 main.go:141] libmachine: STDOUT: 
	I0731 04:10:51.133909    6989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:10:51.133926    6989 client.go:171] LocalClient.Create took 370.942334ms
	I0731 04:10:53.135943    6989 start.go:128] duration metric: createHost completed in 2.394794792s
	I0731 04:10:53.135963    6989 start.go:83] releasing machines lock for "offline-docker-750000", held for 2.394867791s
	W0731 04:10:53.135975    6989 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:10:53.146090    6989 out.go:177] * Deleting "offline-docker-750000" in qemu2 ...
	W0731 04:10:53.153481    6989 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:10:53.153492    6989 start.go:687] Will try again in 5 seconds ...
	I0731 04:10:58.155646    6989 start.go:365] acquiring machines lock for offline-docker-750000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:10:58.156031    6989 start.go:369] acquired machines lock for "offline-docker-750000" in 278.375µs
	I0731 04:10:58.156156    6989 start.go:93] Provisioning new machine with config: &{Name:offline-docker-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:10:58.156441    6989 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:10:58.164072    6989 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:10:58.212503    6989 start.go:159] libmachine.API.Create for "offline-docker-750000" (driver="qemu2")
	I0731 04:10:58.212560    6989 client.go:168] LocalClient.Create starting
	I0731 04:10:58.212691    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:10:58.212750    6989 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:58.212773    6989 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:58.212854    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:10:58.212883    6989 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:58.212896    6989 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:58.213409    6989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:10:58.343354    6989 main.go:141] libmachine: Creating SSH key...
	I0731 04:10:58.467705    6989 main.go:141] libmachine: Creating Disk image...
	I0731 04:10:58.467714    6989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:10:58.467898    6989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:58.476357    6989 main.go:141] libmachine: STDOUT: 
	I0731 04:10:58.476371    6989 main.go:141] libmachine: STDERR: 
	I0731 04:10:58.476430    6989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2 +20000M
	I0731 04:10:58.483564    6989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:10:58.483576    6989 main.go:141] libmachine: STDERR: 
	I0731 04:10:58.483589    6989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:58.483596    6989 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:10:58.483640    6989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:23:24:61:59:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/offline-docker-750000/disk.qcow2
	I0731 04:10:58.485120    6989 main.go:141] libmachine: STDOUT: 
	I0731 04:10:58.485132    6989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:10:58.485145    6989 client.go:171] LocalClient.Create took 272.586625ms
	I0731 04:11:00.487263    6989 start.go:128] duration metric: createHost completed in 2.330825792s
	I0731 04:11:00.487346    6989 start.go:83] releasing machines lock for "offline-docker-750000", held for 2.331343042s
	W0731 04:11:00.487761    6989 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:00.497906    6989 out.go:177] 
	W0731 04:11:00.502145    6989 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:11:00.502192    6989 out.go:239] * 
	* 
	W0731 04:11:00.504802    6989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:11:00.512045    6989 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-750000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-07-31 04:11:00.525391 -0700 PDT m=+1043.853359918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-750000 -n offline-docker-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-750000 -n offline-docker-750000: exit status 7 (69.299542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-750000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-750000
--- FAIL: TestOffline (10.07s)
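
"Connection refused" on /var/run/socket_vmnet means nothing was accepting connections on that socket when minikube ran socket_vmnet_client (the full qemu command is in the stderr above), so both VM creation attempts failed immediately. A minimal sketch of host-side checks, assuming the daemon and paths shown in the log:

	ls -l /var/run/socket_vmnet                                            # socket file should exist
	pgrep -fl socket_vmnet                                                 # socket_vmnet daemon should be running
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # should connect instead of refusing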

TestAddons/Setup (42.5s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-756000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-756000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (42.494293167s)

-- stdout --
	* [addons-756000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-756000 in cluster addons-756000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	* Verifying registry addon...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	
	

-- /stdout --
** stderr ** 
	I0731 03:54:16.257822    5296 out.go:296] Setting OutFile to fd 1 ...
	I0731 03:54:16.257954    5296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:54:16.257957    5296 out.go:309] Setting ErrFile to fd 2...
	I0731 03:54:16.257960    5296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:54:16.258065    5296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 03:54:16.259124    5296 out.go:303] Setting JSON to false
	I0731 03:54:16.274042    5296 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8627,"bootTime":1690792229,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 03:54:16.274120    5296 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 03:54:16.277753    5296 out.go:177] * [addons-756000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 03:54:16.280723    5296 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 03:54:16.284564    5296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:54:16.280764    5296 notify.go:220] Checking for updates...
	I0731 03:54:16.292714    5296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 03:54:16.295605    5296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 03:54:16.298715    5296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 03:54:16.301696    5296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 03:54:16.304736    5296 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 03:54:16.308667    5296 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 03:54:16.315670    5296 start.go:298] selected driver: qemu2
	I0731 03:54:16.315676    5296 start.go:898] validating driver "qemu2" against <nil>
	I0731 03:54:16.315682    5296 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 03:54:16.317490    5296 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 03:54:16.320669    5296 out.go:177] * Automatically selected the socket_vmnet network
	I0731 03:54:16.323801    5296 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 03:54:16.323827    5296 cni.go:84] Creating CNI manager for ""
	I0731 03:54:16.323834    5296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:54:16.323839    5296 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 03:54:16.323846    5296 start_flags.go:319] config:
	{Name:addons-756000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-756000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:54:16.327898    5296 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 03:54:16.334677    5296 out.go:177] * Starting control plane node addons-756000 in cluster addons-756000
	I0731 03:54:16.338685    5296 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:54:16.338710    5296 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 03:54:16.338722    5296 cache.go:57] Caching tarball of preloaded images
	I0731 03:54:16.338787    5296 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 03:54:16.338792    5296 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 03:54:16.339001    5296 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/config.json ...
	I0731 03:54:16.339013    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/config.json: {Name:mk022d42a08016938471a37d433ad18a6662427a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:16.339226    5296 start.go:365] acquiring machines lock for addons-756000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 03:54:16.339330    5296 start.go:369] acquired machines lock for "addons-756000" in 97.5µs
	I0731 03:54:16.339341    5296 start.go:93] Provisioning new machine with config: &{Name:addons-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-756000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 03:54:16.339368    5296 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 03:54:16.347584    5296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 03:54:16.372912    5296 start.go:159] libmachine.API.Create for "addons-756000" (driver="qemu2")
	I0731 03:54:16.372950    5296 client.go:168] LocalClient.Create starting
	I0731 03:54:16.373049    5296 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 03:54:16.479634    5296 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 03:54:16.533927    5296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 03:54:16.718235    5296 main.go:141] libmachine: Creating SSH key...
	I0731 03:54:16.859068    5296 main.go:141] libmachine: Creating Disk image...
	I0731 03:54:16.859074    5296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 03:54:16.859225    5296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2
	I0731 03:54:16.868892    5296 main.go:141] libmachine: STDOUT: 
	I0731 03:54:16.868960    5296 main.go:141] libmachine: STDERR: 
	I0731 03:54:16.869031    5296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2 +20000M
	I0731 03:54:16.876201    5296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 03:54:16.876234    5296 main.go:141] libmachine: STDERR: 
	I0731 03:54:16.876249    5296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2
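The disk image is built by shelling out to qemu-img twice: a raw-to-qcow2 convert, then a resize where the +20000M argument grows the image relative to its current virtual size. A minimal os/exec sketch of the same two calls, with placeholder file names standing in for the machine-directory paths:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk mirrors the two qemu-img invocations in the log above:
    // convert the raw seed image to qcow2, then grow it by sizeMB.
    func createDisk(rawPath, qcowPath string, sizeMB int) error {
        convert := exec.Command("qemu-img", "convert",
            "-f", "raw", "-O", "qcow2", rawPath, qcowPath)
        if out, err := convert.CombinedOutput(); err != nil {
            return fmt.Errorf("convert: %v: %s", err, out)
        }
        // The leading "+" makes the resize relative to the current size.
        resize := exec.Command("qemu-img", "resize",
            qcowPath, fmt.Sprintf("+%dM", sizeMB))
        if out, err := resize.CombinedOutput(); err != nil {
            return fmt.Errorf("resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }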
	I0731 03:54:16.876256    5296 main.go:141] libmachine: Starting QEMU VM...
	I0731 03:54:16.876302    5296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:0a:47:ac:f4:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/disk.qcow2
	I0731 03:54:16.911148    5296 main.go:141] libmachine: STDOUT: 
	I0731 03:54:16.911181    5296 main.go:141] libmachine: STDERR: 
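Note the -netdev socket,id=net0,fd=3 flag in the command above: socket_vmnet_client opens the /var/run/socket_vmnet socket itself and execs qemu-system-aarch64 with that connection already present as file descriptor 3, so QEMU gets vmnet connectivity without needing vmnet entitlements of its own. In Go, the same fd handoff can be sketched with exec.Cmd.ExtraFiles, whose entries appear in the child process starting at fd 3 (the unixgram socket type here is an assumption about socket_vmnet's protocol):

    package main

    import (
        "log"
        "net"
        "os"
        "os/exec"
    )

    func main() {
        // Connect to the socket_vmnet unix socket (datagram type assumed).
        conn, err := net.Dial("unixgram", "/var/run/socket_vmnet")
        if err != nil {
            log.Fatal(err)
        }
        f, err := conn.(*net.UnixConn).File() // dup the fd as an *os.File
        if err != nil {
            log.Fatal(err)
        }
        cmd := exec.Command("qemu-system-aarch64",
            "-netdev", "socket,id=net0,fd=3" /* ...remaining flags elided... */)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        // ExtraFiles[0] becomes fd 3 in the child, matching fd=3 above.
        cmd.ExtraFiles = []*os.File{f}
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }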
	I0731 03:54:16.911185    5296 main.go:141] libmachine: Attempt 0
	I0731 03:54:16.911201    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:16.911453    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:16.911475    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:16.911484    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:16.911490    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:16.911495    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:16.911501    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:16.911506    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:16.911512    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:16.911518    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:16.911524    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:16.911529    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:18.913673    5296 main.go:141] libmachine: Attempt 1
	I0731 03:54:18.913760    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:18.914228    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:18.914281    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:18.914312    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:18.914343    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:18.914372    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:18.914401    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:18.914430    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:18.914460    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:18.914683    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:18.914718    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:18.914746    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:20.916876    5296 main.go:141] libmachine: Attempt 2
	I0731 03:54:20.916917    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:20.917036    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:20.917047    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:20.917053    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:20.917060    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:20.917085    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:20.917090    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:20.917096    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:20.917102    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:20.917107    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:20.917113    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:20.917117    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:22.919143    5296 main.go:141] libmachine: Attempt 3
	I0731 03:54:22.919152    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:22.919185    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:22.919199    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:22.919207    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:22.919211    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:22.919216    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:22.919221    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:22.919230    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:22.919236    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:22.919241    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:22.919245    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:22.919252    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:24.921266    5296 main.go:141] libmachine: Attempt 4
	I0731 03:54:24.921280    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:24.921402    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:24.921435    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:24.921446    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:24.921452    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:24.921457    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:24.921462    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:24.921468    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:24.921473    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:24.921478    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:24.921483    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:24.921489    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:26.923551    5296 main.go:141] libmachine: Attempt 5
	I0731 03:54:26.923572    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:26.923651    5296 main.go:141] libmachine: Found 10 entries in /var/db/dhcpd_leases!
	I0731 03:54:26.923660    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 03:54:26.923666    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 03:54:26.923671    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 03:54:26.923677    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 03:54:26.923687    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 03:54:26.923692    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 03:54:26.923698    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 03:54:26.923703    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 03:54:26.923708    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 03:54:26.923714    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 03:54:28.925790    5296 main.go:141] libmachine: Attempt 6
	I0731 03:54:28.925820    5296 main.go:141] libmachine: Searching for 22:a:47:ac:f4:b8 in /var/db/dhcpd_leases ...
	I0731 03:54:28.925944    5296 main.go:141] libmachine: Found 11 entries in /var/db/dhcpd_leases!
	I0731 03:54:28.925957    5296 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 03:54:28.925962    5296 main.go:141] libmachine: Found match: 22:a:47:ac:f4:b8
	I0731 03:54:28.925973    5296 main.go:141] libmachine: IP: 192.168.105.12
	I0731 03:54:28.925979    5296 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.12)...
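IP discovery works by polling the host's /var/db/dhcpd_leases roughly every two seconds (attempts 0 through 6 above) until an entry's hardware address matches the VM's MAC. Note that the searched string is 22:a:47:ac:f4:b8, not the 22:0a:47:ac:f4:b8 passed to QEMU: macOS writes lease MACs with leading zeros stripped, so the search key must be normalized the same way. A minimal sketch of that normalization and scan (the field names follow Apple's dhcpd_leases format as rendered in the log; the pairing logic is deliberately simplified):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // trimMAC drops the leading zero of each octet (0a -> a), matching how
    // macOS records hw_address in /var/db/dhcpd_leases; that is why the log
    // searches for 22:a:47:ac:f4:b8 although QEMU was given 22:0a:47:ac:f4:b8.
    func trimMAC(mac string) string {
        parts := strings.Split(strings.ToLower(mac), ":")
        for i, p := range parts {
            if len(p) == 2 && p[0] == '0' {
                parts[i] = p[1:]
            }
        }
        return strings.Join(parts, ":")
    }

    func main() {
        want := trimMAC("22:0a:47:ac:f4:b8")
        data, err := os.ReadFile("/var/db/dhcpd_leases")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Loose parse: collect ip_address/hw_address pairs in file order.
        ips := regexp.MustCompile(`ip_address=(\S+)`).FindAllStringSubmatch(string(data), -1)
        hws := regexp.MustCompile(`hw_address=1,(\S+)`).FindAllStringSubmatch(string(data), -1)
        for i := range hws {
            if i < len(ips) && hws[i][1] == want {
                fmt.Println("found IP:", ips[i][1])
                return
            }
        }
        fmt.Println("no lease for", want, "yet; retry in 2s")
    }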
	I0731 03:54:30.946998    5296 machine.go:88] provisioning docker machine ...
	I0731 03:54:30.947068    5296 buildroot.go:166] provisioning hostname "addons-756000"
	I0731 03:54:30.948348    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:30.949131    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:30.949149    5296 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-756000 && echo "addons-756000" | sudo tee /etc/hostname
	I0731 03:54:31.028079    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-756000
	
	I0731 03:54:31.028203    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:31.028696    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:31.028710    5296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-756000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-756000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-756000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 03:54:31.095437    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
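Provisioning steps like the hostname command and the /etc/hosts edit above are executed through minikube's native Go SSH client (the "Using SSH client type: native" lines), authenticated with the generated machine key. A minimal sketch of running one such command with golang.org/x/crypto/ssh; the key path and the permissive host-key policy are placeholders for illustration, not minikube's actual settings:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/addons-756000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.105.12:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(
            `sudo hostname addons-756000 && echo "addons-756000" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s\n", err, out)
    }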
	I0731 03:54:31.095454    5296 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16968-4815/.minikube CaCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16968-4815/.minikube}
	I0731 03:54:31.095485    5296 buildroot.go:174] setting up certificates
	I0731 03:54:31.095504    5296 provision.go:83] configureAuth start
	I0731 03:54:31.095512    5296 provision.go:138] copyHostCerts
	I0731 03:54:31.095678    5296 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem (1675 bytes)
	I0731 03:54:31.096000    5296 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem (1078 bytes)
	I0731 03:54:31.096129    5296 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem (1123 bytes)
	I0731 03:54:31.096230    5296 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem org=jenkins.addons-756000 san=[192.168.105.12 192.168.105.12 localhost 127.0.0.1 minikube addons-756000]
	I0731 03:54:31.184232    5296 provision.go:172] copyRemoteCerts
	I0731 03:54:31.184293    5296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 03:54:31.184318    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:31.216987    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 03:54:31.223625    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 03:54:31.230942    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 03:54:31.238541    5296 provision.go:86] duration metric: configureAuth took 143.031334ms
	I0731 03:54:31.238548    5296 buildroot.go:189] setting minikube options for container-runtime
	I0731 03:54:31.238648    5296 config.go:182] Loaded profile config "addons-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 03:54:31.238681    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:31.238900    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:31.238904    5296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 03:54:31.295845    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 03:54:31.295851    5296 buildroot.go:70] root file system type: tmpfs
	I0731 03:54:31.295909    5296 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 03:54:31.295958    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:31.296203    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:31.296240    5296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 03:54:31.357633    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 03:54:31.357681    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:31.357926    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:31.357935    5296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 03:54:31.699439    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 03:54:31.699453    5296 machine.go:91] provisioned docker machine in 752.428541ms
	I0731 03:54:31.699459    5296 client.go:171] LocalClient.Create took 15.32654525s
	I0731 03:54:31.699466    5296 start.go:167] duration metric: libmachine.API.Create for "addons-756000" took 15.326598333s
	I0731 03:54:31.699470    5296 start.go:300] post-start starting for "addons-756000" (driver="qemu2")
	I0731 03:54:31.699475    5296 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 03:54:31.699542    5296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 03:54:31.699552    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:31.729197    5296 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 03:54:31.730630    5296 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 03:54:31.730637    5296 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/addons for local assets ...
	I0731 03:54:31.730701    5296 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/files for local assets ...
	I0731 03:54:31.730725    5296 start.go:303] post-start completed in 31.252625ms
	I0731 03:54:31.731076    5296 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/config.json ...
	I0731 03:54:31.731224    5296 start.go:128] duration metric: createHost completed in 15.391893542s
	I0731 03:54:31.731260    5296 main.go:141] libmachine: Using SSH client type: native
	I0731 03:54:31.731481    5296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b19170] 0x100b1bbd0 <nil>  [] 0s} 192.168.105.12 22 <nil> <nil>}
	I0731 03:54:31.731485    5296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 03:54:31.782889    5296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1690800871.481528377
	
	I0731 03:54:31.782899    5296 fix.go:206] guest clock: 1690800871.481528377
	I0731 03:54:31.782904    5296 fix.go:219] Guest: 2023-07-31 03:54:31.481528377 -0700 PDT Remote: 2023-07-31 03:54:31.731227 -0700 PDT m=+15.492249584 (delta=-249.698623ms)
	I0731 03:54:31.782921    5296 fix.go:190] guest clock delta is within tolerance: -249.698623ms
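The guest clock check runs date +%s.%N inside the VM, parses the output as fractional Unix seconds, and compares it against the host's wall clock; the -249ms delta here is inside the allowed tolerance, so no resync is needed. A sketch of that comparison (the one-second tolerance is an assumed value for illustration):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1690800871.481528377" // guest `date +%s.%N`
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        // float64 loses sub-microsecond precision at this magnitude,
        // which is fine for a millisecond-scale tolerance check.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(time.Now())

        const tolerance = time.Second // illustrative threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v too large, would resync\n", delta)
        }
    }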
	I0731 03:54:31.782924    5296 start.go:83] releasing machines lock for "addons-756000", held for 15.443630167s
	I0731 03:54:31.783292    5296 ssh_runner.go:195] Run: cat /version.json
	I0731 03:54:31.783301    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:31.786189    5296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 03:54:31.786248    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:31.810657    5296 ssh_runner.go:195] Run: systemctl --version
	I0731 03:54:31.812712    5296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 03:54:31.814530    5296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 03:54:31.814560    5296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 03:54:31.819567    5296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 03:54:31.819576    5296 start.go:466] detecting cgroup driver to use...
	I0731 03:54:31.819669    5296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 03:54:31.860617    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 03:54:31.863885    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 03:54:31.867320    5296 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 03:54:31.867358    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 03:54:31.870561    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 03:54:31.873891    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 03:54:31.876728    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 03:54:31.879598    5296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 03:54:31.882911    5296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 03:54:31.886203    5296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 03:54:31.888828    5296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 03:54:31.891472    5296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:54:31.978524    5296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 03:54:31.985187    5296 start.go:466] detecting cgroup driver to use...
	I0731 03:54:31.985246    5296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 03:54:31.993079    5296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 03:54:31.997228    5296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 03:54:32.003304    5296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 03:54:32.008620    5296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 03:54:32.013234    5296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 03:54:32.046551    5296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 03:54:32.051929    5296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 03:54:32.057465    5296 ssh_runner.go:195] Run: which cri-dockerd
	I0731 03:54:32.058799    5296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 03:54:32.061653    5296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 03:54:32.066388    5296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 03:54:32.141418    5296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 03:54:32.217423    5296 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 03:54:32.217439    5296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0731 03:54:32.222661    5296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:54:32.296307    5296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 03:54:33.456735    5296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160416208s)
	I0731 03:54:33.456811    5296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 03:54:33.537615    5296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 03:54:33.616996    5296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 03:54:33.701808    5296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:54:33.776492    5296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 03:54:33.783806    5296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:54:33.881577    5296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0731 03:54:33.904405    5296 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 03:54:33.905223    5296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 03:54:33.907470    5296 start.go:534] Will wait 60s for crictl version
	I0731 03:54:33.907513    5296 ssh_runner.go:195] Run: which crictl
	I0731 03:54:33.909033    5296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 03:54:33.924002    5296 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0731 03:54:33.924081    5296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 03:54:33.933383    5296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 03:54:33.949591    5296 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0731 03:54:33.949677    5296 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0731 03:54:33.950989    5296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 03:54:33.954893    5296 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:54:33.954943    5296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 03:54:33.964069    5296 docker.go:636] Got preloaded images: 
	I0731 03:54:33.964076    5296 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0731 03:54:33.964113    5296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 03:54:33.967305    5296 ssh_runner.go:195] Run: which lz4
	I0731 03:54:33.968715    5296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 03:54:33.969867    5296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 03:54:33.969879    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0731 03:54:35.220247    5296 docker.go:600] Took 1.251578 seconds to copy over tarball
	I0731 03:54:35.220314    5296 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 03:54:36.256083    5296 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.035759208s)
	I0731 03:54:36.256096    5296 ssh_runner.go:146] rm: /preloaded.tar.lz4
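The preload flow above is: the stat check shows /preloaded.tar.lz4 is absent, the 343MB tarball is copied over SSH, extracted with tar -I lz4 into /var, then deleted. Note also that ssh_runner reports the duration of any command that runs long, as in the "Completed: ... (1.035759208s)" line. A loose sketch of that run-and-time wrapper (the one-second reporting threshold is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runLogged loosely mirrors ssh_runner's behavior in the log: run a
    // command, and report the duration if it exceeds a threshold.
    func runLogged(name string, args ...string) error {
        start := time.Now()
        cmd := exec.Command(name, args...)
        out, err := cmd.CombinedOutput()
        if d := time.Since(start); d > time.Second {
            fmt.Printf("Completed: %s: (%v)\n", cmd, d)
        }
        if err != nil {
            return fmt.Errorf("%v: %s", err, out)
        }
        return nil
    }

    func main() {
        // The extract step from the log: untar the lz4 preload into /var.
        _ = runLogged("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    }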
	I0731 03:54:36.271484    5296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 03:54:36.274967    5296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0731 03:54:36.280077    5296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:54:36.361789    5296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 03:54:37.999359    5296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.63755825s)
	I0731 03:54:37.999452    5296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 03:54:38.005666    5296 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 03:54:38.005676    5296 cache_images.go:84] Images are preloaded, skipping loading
	I0731 03:54:38.005742    5296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 03:54:38.013401    5296 cni.go:84] Creating CNI manager for ""
	I0731 03:54:38.013409    5296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:54:38.013443    5296 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 03:54:38.013458    5296 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.12 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-756000 NodeName:addons-756000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 03:54:38.013521    5296 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-756000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
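The generated kubeadm config above is a single file holding four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of reading such a multi-document stream with gopkg.in/yaml.v3, where each Decode call yields one document (the file name is a placeholder):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries its own apiVersion/kind pair.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }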
	
	I0731 03:54:38.013550    5296 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-756000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-756000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 03:54:38.013605    5296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 03:54:38.016497    5296 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 03:54:38.016528    5296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 03:54:38.019238    5296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0731 03:54:38.024289    5296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 03:54:38.028895    5296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0731 03:54:38.033536    5296 ssh_runner.go:195] Run: grep 192.168.105.12	control-plane.minikube.internal$ /etc/hosts
	I0731 03:54:38.034763    5296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 03:54:38.038737    5296 certs.go:56] Setting up /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000 for IP: 192.168.105.12
	I0731 03:54:38.038757    5296 certs.go:190] acquiring lock for shared ca certs: {Name:mk645bb5ce6691935288c693436a38a3c4bde2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.038909    5296 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key
	I0731 03:54:38.091889    5296 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt ...
	I0731 03:54:38.091893    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt: {Name:mk30896f1ecaac6be39011e5926fa8da1fc4c9af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.092069    5296 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key ...
	I0731 03:54:38.092072    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key: {Name:mk7566df23c5e692b7aecfcd910d797e50388e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.092916    5296 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key
	I0731 03:54:38.198587    5296 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt ...
	I0731 03:54:38.198591    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt: {Name:mk41ff18c40b99fe8278524a9443ff8bddc24bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.198725    5296 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key ...
	I0731 03:54:38.198728    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key: {Name:mk2e99ced8dbbbaff7eb668fa64e67b130dc5e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.198857    5296 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.key
	I0731 03:54:38.198862    5296 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.crt with IP's: []
	I0731 03:54:38.283084    5296 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.crt ...
	I0731 03:54:38.283090    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.crt: {Name:mk2bad6c52bc8bc6d2a2b0343a79a383900eabe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.283273    5296 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.key ...
	I0731 03:54:38.283276    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/client.key: {Name:mk65b944c47483185062ceaad486b8a5d74a0a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.283374    5296 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key.3c7421d4
	I0731 03:54:38.283384    5296 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt.3c7421d4 with IP's: [192.168.105.12 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 03:54:38.481520    5296 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt.3c7421d4 ...
	I0731 03:54:38.481525    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt.3c7421d4: {Name:mkc35a5f3dcf29e484524d4bd9e00fe02560bed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.481690    5296 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key.3c7421d4 ...
	I0731 03:54:38.481693    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key.3c7421d4: {Name:mk6975a028253885dcde3f2663988029b8857852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.481793    5296 certs.go:337] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt.3c7421d4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt
	I0731 03:54:38.482056    5296 certs.go:341] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key.3c7421d4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key
	I0731 03:54:38.482138    5296 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.key
	I0731 03:54:38.482148    5296 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.crt with IP's: []
	I0731 03:54:38.570233    5296 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.crt ...
	I0731 03:54:38.570238    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.crt: {Name:mka1cd9754313cbc7f9fcae2572837d28c5afbc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:38.570388    5296 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.key ...
	I0731 03:54:38.570391    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.key: {Name:mk9384eddd1aea5b85c2afcafd87dd3378cd2716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
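The cert bootstrapping above builds two small PKIs from scratch, the cluster CA (minikubeCA) and the proxy-client CA, and then signs three leaf certs with them: a client cert, an apiserver cert with SANs, and an aggregator proxy-client cert. A minimal crypto/x509 sketch of the first step, generating a self-signed CA and writing it as PEM; the key size and validity period are assumptions for illustration (the 1675-byte .pem sizes in the log are at least consistent with RSA-2048 PKCS#1 keys):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true, // a CA cert: it can sign the leaf certs
        }
        // Self-signed: template and parent are the same certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        crt, _ := os.Create("ca.crt")
        defer crt.Close()
        pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})

        k, _ := os.Create("ca.key")
        defer k.Close()
        pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }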
	I0731 03:54:38.570652    5296 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 03:54:38.570676    5296 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem (1078 bytes)
	I0731 03:54:38.570697    5296 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem (1123 bytes)
	I0731 03:54:38.570719    5296 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem (1675 bytes)
	I0731 03:54:38.571076    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 03:54:38.578723    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 03:54:38.585694    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 03:54:38.592152    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/addons-756000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 03:54:38.598837    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 03:54:38.606005    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 03:54:38.612732    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 03:54:38.619284    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 03:54:38.626304    5296 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 03:54:38.633129    5296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 03:54:38.638694    5296 ssh_runner.go:195] Run: openssl version
	I0731 03:54:38.640697    5296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 03:54:38.643664    5296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:54:38.645151    5296 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:54:38.645170    5296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:54:38.646897    5296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 03:54:38.650207    5296 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 03:54:38.651489    5296 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 03:54:38.651526    5296 kubeadm.go:404] StartCluster: {Name:addons-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-756000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.12 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:54:38.651588    5296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 03:54:38.656823    5296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 03:54:38.659741    5296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 03:54:38.662869    5296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 03:54:38.665984    5296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 03:54:38.665999    5296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 03:54:38.688663    5296 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 03:54:38.688700    5296 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 03:54:38.739604    5296 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 03:54:38.739653    5296 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 03:54:38.739709    5296 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 03:54:38.800939    5296 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 03:54:38.805066    5296 out.go:204]   - Generating certificates and keys ...
	I0731 03:54:38.805129    5296 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 03:54:38.805165    5296 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 03:54:38.918111    5296 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 03:54:39.031599    5296 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 03:54:39.075840    5296 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 03:54:39.138515    5296 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 03:54:39.219766    5296 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 03:54:39.219826    5296 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-756000 localhost] and IPs [192.168.105.12 127.0.0.1 ::1]
	I0731 03:54:39.287889    5296 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 03:54:39.287956    5296 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-756000 localhost] and IPs [192.168.105.12 127.0.0.1 ::1]
	I0731 03:54:39.354752    5296 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 03:54:39.442142    5296 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 03:54:39.525846    5296 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 03:54:39.525873    5296 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 03:54:39.563499    5296 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 03:54:39.692360    5296 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 03:54:39.807752    5296 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 03:54:39.897411    5296 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 03:54:39.903988    5296 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 03:54:39.904070    5296 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 03:54:39.904087    5296 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 03:54:39.988619    5296 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 03:54:39.992581    5296 out.go:204]   - Booting up control plane ...
	I0731 03:54:39.992622    5296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 03:54:39.992679    5296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 03:54:39.992721    5296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 03:54:39.992776    5296 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 03:54:39.994009    5296 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 03:54:44.497881    5296 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.503255 seconds
	I0731 03:54:44.498153    5296 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 03:54:44.514215    5296 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 03:54:45.026790    5296 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 03:54:45.026931    5296 kubeadm.go:322] [mark-control-plane] Marking the node addons-756000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 03:54:45.534942    5296 kubeadm.go:322] [bootstrap-token] Using token: 71q2pa.wln2z1o58p9nk2hx
	I0731 03:54:45.537914    5296 out.go:204]   - Configuring RBAC rules ...
	I0731 03:54:45.538008    5296 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 03:54:45.539438    5296 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 03:54:45.543633    5296 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 03:54:45.546118    5296 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 03:54:45.548796    5296 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 03:54:45.550482    5296 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 03:54:45.555934    5296 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 03:54:45.721953    5296 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 03:54:45.943034    5296 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 03:54:45.943386    5296 kubeadm.go:322] 
	I0731 03:54:45.943415    5296 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 03:54:45.943421    5296 kubeadm.go:322] 
	I0731 03:54:45.943459    5296 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 03:54:45.943465    5296 kubeadm.go:322] 
	I0731 03:54:45.943476    5296 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 03:54:45.943502    5296 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 03:54:45.943524    5296 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 03:54:45.943527    5296 kubeadm.go:322] 
	I0731 03:54:45.943570    5296 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 03:54:45.943575    5296 kubeadm.go:322] 
	I0731 03:54:45.943600    5296 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 03:54:45.943602    5296 kubeadm.go:322] 
	I0731 03:54:45.943634    5296 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 03:54:45.943674    5296 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 03:54:45.943712    5296 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 03:54:45.943714    5296 kubeadm.go:322] 
	I0731 03:54:45.943755    5296 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 03:54:45.943788    5296 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 03:54:45.943790    5296 kubeadm.go:322] 
	I0731 03:54:45.943823    5296 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 71q2pa.wln2z1o58p9nk2hx \
	I0731 03:54:45.943882    5296 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa \
	I0731 03:54:45.943893    5296 kubeadm.go:322] 	--control-plane 
	I0731 03:54:45.943896    5296 kubeadm.go:322] 
	I0731 03:54:45.943956    5296 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 03:54:45.943962    5296 kubeadm.go:322] 
	I0731 03:54:45.944030    5296 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 71q2pa.wln2z1o58p9nk2hx \
	I0731 03:54:45.944090    5296 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa 
	I0731 03:54:45.944149    5296 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 03:54:45.944157    5296 cni.go:84] Creating CNI manager for ""
	I0731 03:54:45.944164    5296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:54:45.947676    5296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 03:54:45.954706    5296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 03:54:45.957761    5296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0731 03:54:45.963092    5296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 03:54:45.963153    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=addons-756000 minikube.k8s.io/updated_at=2023_07_31T03_54_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:45.963153    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:46.022622    5296 ops.go:34] apiserver oom_adj: -16
	I0731 03:54:46.022659    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:46.058900    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:46.592878    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:47.092829    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:47.593169    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:48.092849    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:48.592909    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:49.092879    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:49.593117    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:50.093052    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:50.593078    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:51.093093    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:51.593061    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:52.093113    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:52.593062    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:53.091812    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:53.593022    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:54.092780    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:54.593015    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:55.093011    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:55.593129    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:56.091369    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:56.593059    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:57.092816    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:57.592806    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:58.092769    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:58.592735    5296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 03:54:58.630033    5296 kubeadm.go:1081] duration metric: took 12.6669655s to wait for elevateKubeSystemPrivileges.
	I0731 03:54:58.630048    5296 kubeadm.go:406] StartCluster complete in 19.978576666s
	I0731 03:54:58.630057    5296 settings.go:142] acquiring lock: {Name:mk7e2067b9c26be8d46dc95ba3a8a7ad946cadb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:58.630211    5296 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:54:58.630392    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/kubeconfig: {Name:mk98971837606256b8bab3d325e05dbfd512b496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:58.630590    5296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 03:54:58.630705    5296 config.go:182] Loaded profile config "addons-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 03:54:58.630697    5296 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0731 03:54:58.630762    5296 addons.go:69] Setting volumesnapshots=true in profile "addons-756000"
	I0731 03:54:58.630767    5296 addons.go:69] Setting ingress=true in profile "addons-756000"
	I0731 03:54:58.630771    5296 addons.go:231] Setting addon volumesnapshots=true in "addons-756000"
	I0731 03:54:58.630774    5296 addons.go:231] Setting addon ingress=true in "addons-756000"
	I0731 03:54:58.630792    5296 addons.go:69] Setting metrics-server=true in profile "addons-756000"
	I0731 03:54:58.630794    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.630799    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.630801    5296 addons.go:69] Setting ingress-dns=true in profile "addons-756000"
	I0731 03:54:58.630806    5296 addons.go:231] Setting addon ingress-dns=true in "addons-756000"
	I0731 03:54:58.630827    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.630834    5296 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-756000"
	I0731 03:54:58.630878    5296 addons.go:69] Setting default-storageclass=true in profile "addons-756000"
	I0731 03:54:58.630905    5296 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-756000"
	I0731 03:54:58.630915    5296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-756000"
	I0731 03:54:58.630930    5296 addons.go:69] Setting inspektor-gadget=true in profile "addons-756000"
	I0731 03:54:58.630980    5296 addons.go:231] Setting addon inspektor-gadget=true in "addons-756000"
	I0731 03:54:58.631005    5296 addons.go:69] Setting registry=true in profile "addons-756000"
	I0731 03:54:58.631020    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.631025    5296 addons.go:231] Setting addon registry=true in "addons-756000"
	I0731 03:54:58.631036    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.630799    5296 addons.go:231] Setting addon metrics-server=true in "addons-756000"
	I0731 03:54:58.631078    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.631087    5296 host.go:66] Checking if "addons-756000" exists ...
	W0731 03:54:58.631086    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	W0731 03:54:58.631095    5296 addons.go:277] "addons-756000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0731 03:54:58.631098    5296 addons.go:467] Verifying addon ingress=true in "addons-756000"
	I0731 03:54:58.634951    5296 out.go:177] * Verifying ingress addon...
	I0731 03:54:58.630957    5296 addons.go:69] Setting gcp-auth=true in profile "addons-756000"
	I0731 03:54:58.630892    5296 addons.go:69] Setting cloud-spanner=true in profile "addons-756000"
	W0731 03:54:58.631283    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	I0731 03:54:58.631062    5296 addons.go:69] Setting storage-provisioner=true in profile "addons-756000"
	W0731 03:54:58.631311    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	W0731 03:54:58.631402    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	W0731 03:54:58.631431    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	W0731 03:54:58.631630    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	I0731 03:54:58.643967    5296 addons.go:231] Setting addon storage-provisioner=true in "addons-756000"
	I0731 03:54:58.643987    5296 addons.go:231] Setting addon cloud-spanner=true in "addons-756000"
	W0731 03:54:58.643992    5296 addons.go:277] "addons-756000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0731 03:54:58.643996    5296 addons.go:277] "addons-756000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0731 03:54:58.644000    5296 addons_storage_classes.go:55] "addons-756000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0731 03:54:58.644003    5296 addons.go:277] "addons-756000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0731 03:54:58.644005    5296 addons.go:277] "addons-756000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0731 03:54:58.644014    5296 mustload.go:65] Loading cluster: addons-756000
	I0731 03:54:58.644406    5296 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 03:54:58.647922    5296 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0731 03:54:58.654018    5296 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-756000"
	I0731 03:54:58.656879    5296 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 03:54:58.656885    5296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 03:54:58.654022    5296 addons.go:231] Setting addon default-storageclass=true in "addons-756000"
	I0731 03:54:58.660907    5296 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 03:54:58.656893    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:58.654068    5296 addons.go:467] Verifying addon metrics-server=true in "addons-756000"
	I0731 03:54:58.654079    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.654112    5296 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 03:54:58.654115    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.654223    5296 config.go:182] Loaded profile config "addons-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 03:54:58.656902    5296 host.go:66] Checking if "addons-756000" exists ...
	I0731 03:54:58.654024    5296 addons.go:467] Verifying addon registry=true in "addons-756000"
	I0731 03:54:58.659686    5296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-756000" context rescaled to 1 replicas
	I0731 03:54:58.660414    5296 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 03:54:58.675851    5296 out.go:177] * Verifying registry addon...
	I0731 03:54:58.668188    5296 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.12 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	W0731 03:54:58.668514    5296 host.go:54] host status for "addons-756000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	I0731 03:54:58.668649    5296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 03:54:58.672840    5296 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 03:54:58.679400    5296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 03:54:58.684859    5296 out.go:177] 
	W0731 03:54:58.684863    5296 addons.go:277] "addons-756000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I0731 03:54:58.688621    5296 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 03:54:58.691964    5296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 03:54:58.691967    5296 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0731 03:54:58.691972    5296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 03:54:58.695907    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:58.695952    5296 out.go:177] * Verifying Kubernetes components...
	I0731 03:54:58.699947    5296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 03:54:58.698002    5296 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	W0731 03:54:58.702832    5296 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/monitor: connect: connection refused
	W0731 03:54:58.702840    5296 out.go:239] * 
	* 
	I0731 03:54:58.698396    5296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 03:54:58.699960    5296 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 03:54:58.702918    5296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 03:54:58.702925    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	I0731 03:54:58.700001    5296 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0731 03:54:58.702947    5296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0731 03:54:58.702952    5296 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/addons-756000/id_rsa Username:docker}
	W0731 03:54:58.703391    5296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 03:54:58.716832    5296 out.go:177] 
	I0731 03:54:58.707954    5296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 03:54:58.714214    5296 node_ready.go:35] waiting up to 6m0s for node "addons-756000" to be "Ready" ...

                                                
                                                
** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-756000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (42.50s)
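Note: this failure, like the ones below, bottoms out in two connection-refused errors visible in the log: minikube cannot dial the machine monitor socket (.minikube/machines/addons-756000/monitor) and, in the subsequent tests, QEMU cannot reach /var/run/socket_vmnet. A minimal host-side triage sketch follows; it assumes socket_vmnet was installed as the launchd service its upstream README describes, so the service label used here is an assumption, not something taken from this log:

	# Is the socket the log dials actually being served? (path taken from the errors above)
	ls -l /var/run/socket_vmnet
	# Is the helper loaded at all? (label below is the upstream default, assumed here)
	sudo launchctl list | grep -i socket_vmnet
	# Force-restart the helper if it is loaded but wedged
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet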

                                                
                                    
TestCertOptions (10.24s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.955749875s)

                                                
                                                
-- stdout --
	* [cert-options-940000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-940000 in cluster cert-options-940000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-940000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-940000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (82.379958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-940000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-940000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-940000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.9535ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-940000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-940000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-940000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-07-31 04:11:30.963847 -0700 PDT m=+1074.292507251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-940000 -n cert-options-940000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-940000 -n cert-options-940000: exit status 7 (29.07475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-940000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-940000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-940000
--- FAIL: TestCertOptions (10.24s)
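Note: the SAN assertions at cert_options_test.go:69 are vacuous here; no VM ever booted, so there was no apiserver.crt to read. On a cluster that does start, the check amounts to dumping the certificate's Subject Alternative Name block over the same ssh path the test used. The grep filter below is an illustrative addition, not part of the test:

	# Sketch: inspect the apiserver cert SANs on a running cert-options profile
	out/minikube-darwin-arm64 -p cert-options-940000 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# With the flags above, the output should list 127.0.0.1, 192.168.15.15,
	# localhost and www.google.com alongside minikube's default names and IPs.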

                                                
                                    
TestCertExpiration (195.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.919982042s)

                                                
                                                
-- stdout --
	* [cert-expiration-468000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-468000 in cluster cert-expiration-468000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-468000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-468000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0731 04:11:30.145656    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229964917s)

                                                
                                                
-- stdout --
	* [cert-expiration-468000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-468000 in cluster cert-expiration-468000
	* Restarting existing qemu2 VM for "cert-expiration-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-468000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-468000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-468000 in cluster cert-expiration-468000
	* Restarting existing qemu2 VM for "cert-expiration-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-07-31 04:14:30.953605 -0700 PDT m=+1254.286358043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-468000 -n cert-expiration-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-468000 -n cert-expiration-468000: exit status 7 (73.092584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-468000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-468000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-468000
--- FAIL: TestCertExpiration (195.38s)
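Note: this test starts a profile with --cert-expiration=3m, waits for the certs to expire, then restarts with --cert-expiration=8760h and expects a warning about expired certificates. Both starts died at VM creation, so the ~195s runtime is almost entirely the deliberate wait (consistent with the three-minute gap between the first start's failure around 04:11:30 and the final failure at 04:14:30). On a cluster that actually starts, the short-lived cert could be confirmed by hand; this is a sketch of such a check, not output from this run:

	# Sketch: confirm the short-lived cert minted by --cert-expiration=3m
	out/minikube-darwin-arm64 -p cert-expiration-468000 ssh -- \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# notAfter= should fall roughly three minutes after the first start completed.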

                                                
                                    
TestDockerFlags (10.12s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-595000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-595000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.866467125s)

                                                
                                                
-- stdout --
	* [docker-flags-595000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-595000 in cluster docker-flags-595000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-595000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:11:10.762977    7199 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:11:10.763110    7199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:10.763113    7199 out.go:309] Setting ErrFile to fd 2...
	I0731 04:11:10.763116    7199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:10.763219    7199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:11:10.764241    7199 out.go:303] Setting JSON to false
	I0731 04:11:10.779361    7199 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9641,"bootTime":1690792229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:11:10.779420    7199 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:11:10.788623    7199 out.go:177] * [docker-flags-595000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:11:10.792670    7199 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:11:10.792731    7199 notify.go:220] Checking for updates...
	I0731 04:11:10.795606    7199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:11:10.798714    7199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:11:10.801685    7199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:11:10.803057    7199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:11:10.805705    7199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:11:10.809047    7199 config.go:182] Loaded profile config "force-systemd-flag-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:11:10.809114    7199 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:11:10.809160    7199 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:11:10.813530    7199 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:11:10.820642    7199 start.go:298] selected driver: qemu2
	I0731 04:11:10.820647    7199 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:11:10.820660    7199 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:11:10.822567    7199 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:11:10.825671    7199 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:11:10.828695    7199 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 04:11:10.828713    7199 cni.go:84] Creating CNI manager for ""
	I0731 04:11:10.828718    7199 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:11:10.828722    7199 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:11:10.828728    7199 start_flags.go:319] config:
	{Name:docker-flags-595000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-595000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:11:10.832807    7199 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:11:10.838570    7199 out.go:177] * Starting control plane node docker-flags-595000 in cluster docker-flags-595000
	I0731 04:11:10.842666    7199 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:11:10.842691    7199 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:11:10.842705    7199 cache.go:57] Caching tarball of preloaded images
	I0731 04:11:10.842769    7199 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:11:10.842774    7199 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:11:10.842839    7199 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/docker-flags-595000/config.json ...
	I0731 04:11:10.842850    7199 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/docker-flags-595000/config.json: {Name:mk48868fc1eb3edf87d0a6efe85591918cba0317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:11:10.843056    7199 start.go:365] acquiring machines lock for docker-flags-595000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:10.843089    7199 start.go:369] acquired machines lock for "docker-flags-595000" in 23.542µs
	I0731 04:11:10.843100    7199 start.go:93] Provisioning new machine with config: &{Name:docker-flags-595000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-595000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:10.843131    7199 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:10.847755    7199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:10.863761    7199 start.go:159] libmachine.API.Create for "docker-flags-595000" (driver="qemu2")
	I0731 04:11:10.863780    7199 client.go:168] LocalClient.Create starting
	I0731 04:11:10.863845    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:10.863864    7199 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:10.863875    7199 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:10.863921    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:10.863935    7199 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:10.863942    7199 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:10.864253    7199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:10.983153    7199 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:11.175834    7199 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:11.175845    7199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:11.176016    7199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:11.184913    7199 main.go:141] libmachine: STDOUT: 
	I0731 04:11:11.184934    7199 main.go:141] libmachine: STDERR: 
	I0731 04:11:11.184979    7199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2 +20000M
	I0731 04:11:11.192252    7199 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:11.192264    7199 main.go:141] libmachine: STDERR: 
	I0731 04:11:11.192284    7199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:11.192288    7199 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:11.192320    7199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0a:2f:35:2e:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:11.193857    7199 main.go:141] libmachine: STDOUT: 
	I0731 04:11:11.193866    7199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:11.193884    7199 client.go:171] LocalClient.Create took 330.107334ms
	I0731 04:11:13.196011    7199 start.go:128] duration metric: createHost completed in 2.35290175s
	I0731 04:11:13.196105    7199 start.go:83] releasing machines lock for "docker-flags-595000", held for 2.353031125s
	W0731 04:11:13.196190    7199 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:13.214315    7199 out.go:177] * Deleting "docker-flags-595000" in qemu2 ...
	W0731 04:11:13.233469    7199 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:13.233497    7199 start.go:687] Will try again in 5 seconds ...
	I0731 04:11:18.235534    7199 start.go:365] acquiring machines lock for docker-flags-595000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:18.235992    7199 start.go:369] acquired machines lock for "docker-flags-595000" in 358.625µs
	I0731 04:11:18.236100    7199 start.go:93] Provisioning new machine with config: &{Name:docker-flags-595000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-595000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:18.236400    7199 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:18.245571    7199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:18.293286    7199 start.go:159] libmachine.API.Create for "docker-flags-595000" (driver="qemu2")
	I0731 04:11:18.293328    7199 client.go:168] LocalClient.Create starting
	I0731 04:11:18.293526    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:18.293618    7199 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:18.293634    7199 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:18.293717    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:18.293747    7199 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:18.293762    7199 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:18.294488    7199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:18.425635    7199 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:18.544547    7199 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:18.544555    7199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:18.544708    7199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:18.553231    7199 main.go:141] libmachine: STDOUT: 
	I0731 04:11:18.553242    7199 main.go:141] libmachine: STDERR: 
	I0731 04:11:18.553287    7199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2 +20000M
	I0731 04:11:18.560450    7199 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:18.560486    7199 main.go:141] libmachine: STDERR: 
	I0731 04:11:18.560496    7199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:18.560503    7199 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:18.560546    7199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e6:4a:38:cb:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/docker-flags-595000/disk.qcow2
	I0731 04:11:18.562110    7199 main.go:141] libmachine: STDOUT: 
	I0731 04:11:18.562120    7199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:18.562131    7199 client.go:171] LocalClient.Create took 268.804125ms
	I0731 04:11:20.564260    7199 start.go:128] duration metric: createHost completed in 2.32787425s
	I0731 04:11:20.564309    7199 start.go:83] releasing machines lock for "docker-flags-595000", held for 2.32834425s
	W0731 04:11:20.564681    7199 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-595000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-595000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:20.573496    7199 out.go:177] 
	W0731 04:11:20.577509    7199 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:11:20.577533    7199 out.go:239] * 
	* 
	W0731 04:11:20.579997    7199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:11:20.589445    7199 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-595000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (80.116334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-595000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-595000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-595000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-595000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.279667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-595000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-595000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-595000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-595000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-07-31 04:11:20.730957 -0700 PDT m=+1064.059385293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-595000 -n docker-flags-595000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-595000 -n docker-flags-595000: exit status 7 (28.161791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-595000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-595000
--- FAIL: TestDockerFlags (10.12s)
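
Note: every start attempt in this group fails at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the later ssh/systemctl assertions fail with exit status 89. A quick standalone probe in Go (a sketch, using only the socket path from the logs) shows whether the socket_vmnet daemon is accepting connections at all:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client uses. A
        // "connection refused" here reproduces the failure mode in the
        // start logs and usually means the daemon is not running.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Printf("socket_vmnet unreachable: %v\n", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }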

TestForceSystemdFlag (10.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.424578958s)

-- stdout --
	* [force-systemd-flag-941000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-941000 in cluster force-systemd-flag-941000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:11:05.198484    7177 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:11:05.198603    7177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:05.198607    7177 out.go:309] Setting ErrFile to fd 2...
	I0731 04:11:05.198609    7177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:05.198718    7177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:11:05.199723    7177 out.go:303] Setting JSON to false
	I0731 04:11:05.214724    7177 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9636,"bootTime":1690792229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:11:05.214797    7177 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:11:05.219726    7177 out.go:177] * [force-systemd-flag-941000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:11:05.226695    7177 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:11:05.230692    7177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:11:05.226765    7177 notify.go:220] Checking for updates...
	I0731 04:11:05.237704    7177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:11:05.240622    7177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:11:05.243695    7177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:11:05.246701    7177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:11:05.250313    7177 config.go:182] Loaded profile config "force-systemd-env-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:11:05.250400    7177 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:11:05.250455    7177 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:11:05.254650    7177 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:11:05.260685    7177 start.go:298] selected driver: qemu2
	I0731 04:11:05.260691    7177 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:11:05.260698    7177 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:11:05.262728    7177 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:11:05.265670    7177 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:11:05.268788    7177 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 04:11:05.268808    7177 cni.go:84] Creating CNI manager for ""
	I0731 04:11:05.268823    7177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:11:05.268826    7177 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:11:05.268832    7177 start_flags.go:319] config:
	{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:11:05.273020    7177 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:11:05.280645    7177 out.go:177] * Starting control plane node force-systemd-flag-941000 in cluster force-systemd-flag-941000
	I0731 04:11:05.284691    7177 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:11:05.284714    7177 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:11:05.284725    7177 cache.go:57] Caching tarball of preloaded images
	I0731 04:11:05.284782    7177 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:11:05.284787    7177 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:11:05.284849    7177 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/force-systemd-flag-941000/config.json ...
	I0731 04:11:05.284862    7177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/force-systemd-flag-941000/config.json: {Name:mk1aa64ebe8d19ae2bf766e8ea9fa496d1c42a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:11:05.285083    7177 start.go:365] acquiring machines lock for force-systemd-flag-941000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:05.285125    7177 start.go:369] acquired machines lock for "force-systemd-flag-941000" in 34.125µs
	I0731 04:11:05.285138    7177 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:05.285167    7177 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:05.293658    7177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:05.310255    7177 start.go:159] libmachine.API.Create for "force-systemd-flag-941000" (driver="qemu2")
	I0731 04:11:05.310276    7177 client.go:168] LocalClient.Create starting
	I0731 04:11:05.310374    7177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:05.310396    7177 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:05.310411    7177 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:05.310460    7177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:05.310476    7177 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:05.310488    7177 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:05.310833    7177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:05.430011    7177 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:05.489935    7177 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:05.489941    7177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:05.490084    7177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:05.498620    7177 main.go:141] libmachine: STDOUT: 
	I0731 04:11:05.498633    7177 main.go:141] libmachine: STDERR: 
	I0731 04:11:05.498679    7177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2 +20000M
	I0731 04:11:05.505729    7177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:05.505741    7177 main.go:141] libmachine: STDERR: 
	I0731 04:11:05.505760    7177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:05.505768    7177 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:05.505809    7177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:cd:c9:0c:ba:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:05.507327    7177 main.go:141] libmachine: STDOUT: 
	I0731 04:11:05.507339    7177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:05.507356    7177 client.go:171] LocalClient.Create took 197.08025ms
	I0731 04:11:07.509506    7177 start.go:128] duration metric: createHost completed in 2.224364333s
	I0731 04:11:07.509578    7177 start.go:83] releasing machines lock for "force-systemd-flag-941000", held for 2.224493s
	W0731 04:11:07.509654    7177 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:07.521149    7177 out.go:177] * Deleting "force-systemd-flag-941000" in qemu2 ...
	W0731 04:11:07.542179    7177 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:07.542204    7177 start.go:687] Will try again in 5 seconds ...
	I0731 04:11:12.544368    7177 start.go:365] acquiring machines lock for force-systemd-flag-941000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:13.196299    7177 start.go:369] acquired machines lock for "force-systemd-flag-941000" in 651.819459ms
	I0731 04:11:13.196403    7177 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:13.196733    7177 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:13.207385    7177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:13.254791    7177 start.go:159] libmachine.API.Create for "force-systemd-flag-941000" (driver="qemu2")
	I0731 04:11:13.254839    7177 client.go:168] LocalClient.Create starting
	I0731 04:11:13.255056    7177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:13.255112    7177 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:13.255135    7177 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:13.255224    7177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:13.255257    7177 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:13.255275    7177 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:13.255932    7177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:13.388488    7177 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:13.537766    7177 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:13.537772    7177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:13.537975    7177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:13.546871    7177 main.go:141] libmachine: STDOUT: 
	I0731 04:11:13.546885    7177 main.go:141] libmachine: STDERR: 
	I0731 04:11:13.546946    7177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2 +20000M
	I0731 04:11:13.554088    7177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:13.554103    7177 main.go:141] libmachine: STDERR: 
	I0731 04:11:13.554126    7177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:13.554132    7177 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:13.554165    7177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f7:02:bb:a8:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0731 04:11:13.555676    7177 main.go:141] libmachine: STDOUT: 
	I0731 04:11:13.555689    7177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:13.555701    7177 client.go:171] LocalClient.Create took 300.861834ms
	I0731 04:11:15.557843    7177 start.go:128] duration metric: createHost completed in 2.361120125s
	I0731 04:11:15.557933    7177 start.go:83] releasing machines lock for "force-systemd-flag-941000", held for 2.361642208s
	W0731 04:11:15.558325    7177 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:15.568980    7177 out.go:177] 
	W0731 04:11:15.571966    7177 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:11:15.572010    7177 out.go:239] * 
	* 
	W0731 04:11:15.573949    7177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:11:15.582887    7177 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.199708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-941000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-07-31 04:11:15.678987 -0700 PDT m=+1059.007299585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-941000 -n force-systemd-flag-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-941000 -n force-systemd-flag-941000: exit status 7 (33.2255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-941000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-941000
--- FAIL: TestForceSystemdFlag (10.63s)
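
Note: once a VM is up, this test asserts the forced cgroup driver by running "docker info --format {{.CgroupDriver}}" over minikube ssh and expecting "systemd". A rough standalone equivalent of that assertion in Go (a sketch against a reachable docker daemon, not the actual test helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask the daemon which cgroup driver it uses; the test expects
        // "systemd" when --force-systemd is passed.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        if driver := strings.TrimSpace(string(out)); driver != "systemd" {
            fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
        } else {
            fmt.Println("cgroup driver is systemd")
        }
    }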

TestForceSystemdEnv (10.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-528000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-528000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.826854s)

-- stdout --
	* [force-systemd-env-528000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-528000 in cluster force-systemd-env-528000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 04:11:00.726212    7135 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:11:00.726342    7135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:00.726345    7135 out.go:309] Setting ErrFile to fd 2...
	I0731 04:11:00.726348    7135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:11:00.726472    7135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:11:00.727577    7135 out.go:303] Setting JSON to false
	I0731 04:11:00.742942    7135 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9631,"bootTime":1690792229,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:11:00.743019    7135 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:11:00.748802    7135 out.go:177] * [force-systemd-env-528000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:11:00.759724    7135 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:11:00.755851    7135 notify.go:220] Checking for updates...
	I0731 04:11:00.767711    7135 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:11:00.775762    7135 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:11:00.782767    7135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:11:00.789768    7135 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:11:00.797636    7135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 04:11:00.802033    7135 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:11:00.802082    7135 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:11:00.804781    7135 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:11:00.811745    7135 start.go:298] selected driver: qemu2
	I0731 04:11:00.811749    7135 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:11:00.811755    7135 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:11:00.813638    7135 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:11:00.817730    7135 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:11:00.821829    7135 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 04:11:00.821849    7135 cni.go:84] Creating CNI manager for ""
	I0731 04:11:00.821856    7135 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:11:00.821862    7135 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:11:00.821867    7135 start_flags.go:319] config:
	{Name:force-systemd-env-528000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-528000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:11:00.826241    7135 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:11:00.833708    7135 out.go:177] * Starting control plane node force-systemd-env-528000 in cluster force-systemd-env-528000
	I0731 04:11:00.838799    7135 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:11:00.838820    7135 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:11:00.838829    7135 cache.go:57] Caching tarball of preloaded images
	I0731 04:11:00.838893    7135 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:11:00.838898    7135 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:11:00.838957    7135 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/force-systemd-env-528000/config.json ...
	I0731 04:11:00.838969    7135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/force-systemd-env-528000/config.json: {Name:mkb5bba226a72a27f0160f752182ac8a03337911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:11:00.839142    7135 start.go:365] acquiring machines lock for force-systemd-env-528000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:00.839173    7135 start.go:369] acquired machines lock for "force-systemd-env-528000" in 23.25µs
	I0731 04:11:00.839185    7135 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-528000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:00.839218    7135 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:00.847716    7135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:00.862713    7135 start.go:159] libmachine.API.Create for "force-systemd-env-528000" (driver="qemu2")
	I0731 04:11:00.862742    7135 client.go:168] LocalClient.Create starting
	I0731 04:11:00.862801    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:00.862820    7135 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:00.862830    7135 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:00.862878    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:00.862891    7135 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:00.862898    7135 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:00.863200    7135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:01.032097    7135 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:01.102584    7135 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:01.102593    7135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:01.102748    7135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:01.111632    7135 main.go:141] libmachine: STDOUT: 
	I0731 04:11:01.111654    7135 main.go:141] libmachine: STDERR: 
	I0731 04:11:01.111725    7135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2 +20000M
	I0731 04:11:01.119932    7135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:01.119948    7135 main.go:141] libmachine: STDERR: 
	I0731 04:11:01.119966    7135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:01.119971    7135 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:01.120011    7135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:cc:a1:d6:31:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:01.121622    7135 main.go:141] libmachine: STDOUT: 
	I0731 04:11:01.121638    7135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:01.121654    7135 client.go:171] LocalClient.Create took 258.912583ms
	I0731 04:11:03.124087    7135 start.go:128] duration metric: createHost completed in 2.284864291s
	I0731 04:11:03.124180    7135 start.go:83] releasing machines lock for "force-systemd-env-528000", held for 2.285047791s
	W0731 04:11:03.124239    7135 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:03.132508    7135 out.go:177] * Deleting "force-systemd-env-528000" in qemu2 ...
	W0731 04:11:03.157265    7135 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:03.157301    7135 start.go:687] Will try again in 5 seconds ...
	I0731 04:11:08.157959    7135 start.go:365] acquiring machines lock for force-systemd-env-528000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:11:08.158494    7135 start.go:369] acquired machines lock for "force-systemd-env-528000" in 387.542µs
	I0731 04:11:08.158593    7135 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-528000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:11:08.158866    7135 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:11:08.168289    7135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 04:11:08.215736    7135 start.go:159] libmachine.API.Create for "force-systemd-env-528000" (driver="qemu2")
	I0731 04:11:08.215803    7135 client.go:168] LocalClient.Create starting
	I0731 04:11:08.215957    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:11:08.216003    7135 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:08.216024    7135 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:08.216101    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:11:08.216129    7135 main.go:141] libmachine: Decoding PEM data...
	I0731 04:11:08.216153    7135 main.go:141] libmachine: Parsing certificate...
	I0731 04:11:08.216631    7135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:11:08.346445    7135 main.go:141] libmachine: Creating SSH key...
	I0731 04:11:08.467367    7135 main.go:141] libmachine: Creating Disk image...
	I0731 04:11:08.467376    7135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:11:08.467601    7135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:08.476072    7135 main.go:141] libmachine: STDOUT: 
	I0731 04:11:08.476083    7135 main.go:141] libmachine: STDERR: 
	I0731 04:11:08.476134    7135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2 +20000M
	I0731 04:11:08.483209    7135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:11:08.483219    7135 main.go:141] libmachine: STDERR: 
	I0731 04:11:08.483230    7135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:08.483235    7135 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:11:08.483280    7135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:25:0e:28:c9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/force-systemd-env-528000/disk.qcow2
	I0731 04:11:08.484821    7135 main.go:141] libmachine: STDOUT: 
	I0731 04:11:08.484831    7135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:11:08.484842    7135 client.go:171] LocalClient.Create took 269.039291ms
	I0731 04:11:10.486984    7135 start.go:128] duration metric: createHost completed in 2.328148666s
	I0731 04:11:10.487073    7135 start.go:83] releasing machines lock for "force-systemd-env-528000", held for 2.328572125s
	W0731 04:11:10.487463    7135 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:11:10.496092    7135 out.go:177] 
	W0731 04:11:10.500073    7135 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:11:10.500098    7135 out.go:239] * 
	* 
	W0731 04:11:10.502611    7135 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:11:10.512053    7135 out.go:177] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-528000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-528000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-528000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (78.687333ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-528000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-528000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-07-31 04:11:10.607613 -0700 PDT m=+1053.935811043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-528000 -n force-systemd-env-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-528000 -n force-systemd-env-528000: exit status 7 (32.640042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-528000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-528000
--- FAIL: TestForceSystemdEnv (10.04s)
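Both force-systemd failures share one root cause, visible in the stderr above: every qemu-system-aarch64 launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A quick manual triage on the build agent might look like the sketch below; the launchd check assumes socket_vmnet was installed as a launchd daemon, which this log does not confirm:

	# Is the socket present, and is any daemon serving it?
	$ ls -l /var/run/socket_vmnet
	$ ps aux | grep -i '[s]ocket_vmnet'
	# If socket_vmnet runs under launchd (assumption), check its state:
	$ sudo launchctl list | grep -i socket_vmnet

If the daemon is down, every qemu2 start on this agent fails the same way, which would account for the repeated "Connection refused" retries logged above.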

TestFunctional/parallel/ServiceCmdConnect (31.45s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-652000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-652000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-5pg7d" [83b14538-36e7-4598-afb6-9149c3fff7d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-5pg7d" [83b14538-36e7-4598-afb6-9149c3fff7d9] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.021062167s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.14:30892
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.14:30892: Get "http://192.168.105.14:30892": dial tcp 192.168.105.14:30892: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-652000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-5pg7d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-652000/192.168.105.14
Start Time:       Mon, 31 Jul 2023 04:02:07 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://96556b3ee14af7dd1ceae7b43e5ff8a7f5f55efd688d01ba50651b9a64c738b7
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 31 Jul 2023 04:02:28 -0700
      Finished:     Mon, 31 Jul 2023 04:02:28 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 31 Jul 2023 04:02:14 -0700
      Finished:     Mon, 31 Jul 2023 04:02:14 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4zg2g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-4zg2g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-5pg7d to functional-652000
Normal   Pulling    30s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     25s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.90538376s (4.905392511s including waiting)
Normal   Created    10s (x3 over 25s)  kubelet            Created container echoserver-arm
Normal   Started    10s (x3 over 25s)  kubelet            Started container echoserver-arm
Normal   Pulled     10s (x2 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    9s (x3 over 23s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-5pg7d_default(83b14538-36e7-4598-afb6-9149c3fff7d9)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-652000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
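This single log line is the real application failure: "exec format error" means the kernel refused to execute /usr/sbin/nginx because the binary's architecture does not match the arm64 node. One way to check which architecture the image actually carries; a sketch, assuming registry access from the agent and that the image is already present on the node:

	# Architecture recorded in the image pulled onto the node:
	$ out/minikube-darwin-arm64 -p functional-652000 ssh "docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8"
	# Or read the manifest from the registry without pulling (recent Docker CLI):
	$ docker manifest inspect registry.k8s.io/echoserver-arm:1.8

An image reporting amd64 here would explain both this error and the BackOff events in the pod description above.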
functional_test.go:1613: (dbg) Run:  kubectl --context functional-652000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.199.252
IPs:                      10.100.199.252
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30892/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
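The empty Endpoints field ties the pieces together: the only matching pod never reaches Ready, so the Service has no endpoints, and kube-proxy rejects connections to NodePort 30892, which is exactly the "connect: connection refused" loop logged earlier. This can be confirmed directly; a sketch using the same kubectl context the test uses:

	# An empty ENDPOINTS column means no Ready pod backs the service.
	$ kubectl --context functional-652000 get endpoints hello-node-connect
	# Cross-check pod readiness against the selector:
	$ kubectl --context functional-652000 get pods -l app=hello-node-connect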
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-652000 -n functional-652000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-652000                                                                                                    | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-652000 service                                                                                            | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| mount   | -p functional-652000                                                                                                 | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh -- ls                                                                                          | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh cat                                                                                            | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | /mount-9p/test-1690801348818439000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh stat                                                                                           | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh stat                                                                                           | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh sudo                                                                                           | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-652000                                                                                                 | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2575899690/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh -- ls                                                                                          | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh sudo                                                                                           | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-652000                                                                                                 | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-652000                                                                                                 | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-652000                                                                                                 | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-652000 ssh findmnt                                                                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT |                     |
	|         | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 03:57:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 03:57:12.008349    5599 out.go:296] Setting OutFile to fd 1 ...
	I0731 03:57:12.008493    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:57:12.008494    5599 out.go:309] Setting ErrFile to fd 2...
	I0731 03:57:12.008496    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:57:12.008615    5599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 03:57:12.009822    5599 out.go:303] Setting JSON to false
	I0731 03:57:12.025810    5599 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8803,"bootTime":1690792229,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 03:57:12.025874    5599 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 03:57:12.030295    5599 out.go:177] * [functional-652000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 03:57:12.036363    5599 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 03:57:12.036354    5599 notify.go:220] Checking for updates...
	I0731 03:57:12.040305    5599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:57:12.044393    5599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 03:57:12.048305    5599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 03:57:12.051350    5599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 03:57:12.054298    5599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 03:57:12.057446    5599 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 03:57:12.057494    5599 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 03:57:12.062310    5599 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 03:57:12.069269    5599 start.go:298] selected driver: qemu2
	I0731 03:57:12.069271    5599 start.go:898] validating driver "qemu2" against &{Name:functional-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:57:12.069327    5599 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 03:57:12.071067    5599 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 03:57:12.071086    5599 cni.go:84] Creating CNI manager for ""
	I0731 03:57:12.071094    5599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:57:12.071101    5599 start_flags.go:319] config:
	{Name:functional-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:57:12.074812    5599 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 03:57:12.082153    5599 out.go:177] * Starting control plane node functional-652000 in cluster functional-652000
	I0731 03:57:12.086288    5599 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:57:12.086306    5599 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 03:57:12.086317    5599 cache.go:57] Caching tarball of preloaded images
	I0731 03:57:12.086388    5599 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 03:57:12.086392    5599 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 03:57:12.086476    5599 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/config.json ...
	I0731 03:57:12.086797    5599 start.go:365] acquiring machines lock for functional-652000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 03:57:12.086824    5599 start.go:369] acquired machines lock for "functional-652000" in 23.084µs
	I0731 03:57:12.086831    5599 start.go:96] Skipping create...Using existing machine configuration
	I0731 03:57:12.086834    5599 fix.go:54] fixHost starting: 
	I0731 03:57:12.087385    5599 fix.go:102] recreateIfNeeded on functional-652000: state=Running err=<nil>
	W0731 03:57:12.087392    5599 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 03:57:12.090283    5599 out.go:177] * Updating the running qemu2 "functional-652000" VM ...
	I0731 03:57:12.098264    5599 machine.go:88] provisioning docker machine ...
	I0731 03:57:12.098274    5599 buildroot.go:166] provisioning hostname "functional-652000"
	I0731 03:57:12.098308    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.098541    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.098545    5599 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-652000 && echo "functional-652000" | sudo tee /etc/hostname
	I0731 03:57:12.163468    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-652000
	
	I0731 03:57:12.163509    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.163739    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.163746    5599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 03:57:12.228674    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
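
The hostname script above is idempotent: it touches /etc/hosts only when no line already names the host, either rewriting an existing 127.0.1.1 entry or appending one. A minimal Go sketch of the append branch only (the sed-style rewrite branch is omitted, and the suffix match is a crude stand-in for the grep patterns):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(strings.TrimSpace(line), hostname) {
    			return nil // an entry already exists; nothing to do
    		}
    	}
    	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
    	return err
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "functional-652000"))
    }
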
	I0731 03:57:12.228681    5599 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16968-4815/.minikube CaCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16968-4815/.minikube}
	I0731 03:57:12.228687    5599 buildroot.go:174] setting up certificates
	I0731 03:57:12.228693    5599 provision.go:83] configureAuth start
	I0731 03:57:12.228696    5599 provision.go:138] copyHostCerts
	I0731 03:57:12.228756    5599 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem, removing ...
	I0731 03:57:12.228760    5599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem
	I0731 03:57:12.228868    5599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem (1078 bytes)
	I0731 03:57:12.229035    5599 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem, removing ...
	I0731 03:57:12.229036    5599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem
	I0731 03:57:12.229108    5599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem (1123 bytes)
	I0731 03:57:12.229225    5599 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem, removing ...
	I0731 03:57:12.229227    5599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem
	I0731 03:57:12.229315    5599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem (1675 bytes)
	I0731 03:57:12.229397    5599 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem org=jenkins.functional-652000 san=[192.168.105.14 192.168.105.14 localhost 127.0.0.1 minikube functional-652000]
	I0731 03:57:12.423933    5599 provision.go:172] copyRemoteCerts
	I0731 03:57:12.423981    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 03:57:12.423990    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 03:57:12.458635    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0731 03:57:12.466159    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 03:57:12.473353    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 03:57:12.480421    5599 provision.go:86] duration metric: configureAuth took 251.723459ms
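
configureAuth above refreshes the host-side CA material and then issues a server certificate for the listed SANs (provision.go:112). A minimal self-signed Go sketch with the same SANs; the real code signs with ca.pem/ca-key.pem, but this sketch self-signs to stay self-contained:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-652000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the san=[...] list above, deduplicated.
    		DNSNames:    []string{"localhost", "minikube", "functional-652000"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.105.14"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
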
	I0731 03:57:12.480427    5599 buildroot.go:189] setting minikube options for container-runtime
	I0731 03:57:12.480526    5599 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 03:57:12.480563    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.480783    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.480786    5599 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 03:57:12.544259    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 03:57:12.544263    5599 buildroot.go:70] root file system type: tmpfs
	I0731 03:57:12.544313    5599 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 03:57:12.544353    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.544573    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.544605    5599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 03:57:12.609365    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 03:57:12.609412    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.609636    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.609643    5599 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 03:57:12.673658    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 03:57:12.673666    5599 machine.go:91] provisioned docker machine in 575.400666ms
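
The unit update above is guarded by a diff: the rendered unit goes to docker.service.new and is swapped in (followed by daemon-reload, enable, and restart) only when it differs from the installed file. A minimal Go sketch of that write-compare-swap pattern; the paths and function name are illustrative:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateIfChanged writes content to path only when it differs from what is
    // already there, so the caller can skip the restart on a no-op.
    func updateIfChanged(path string, content []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // identical: no daemon-reload/restart needed
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, content, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(tmp, path) // swap in, then the caller restarts the unit
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }
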
	I0731 03:57:12.673670    5599 start.go:300] post-start starting for "functional-652000" (driver="qemu2")
	I0731 03:57:12.673675    5599 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 03:57:12.673740    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 03:57:12.673747    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 03:57:12.709900    5599 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 03:57:12.711337    5599 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 03:57:12.711342    5599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/addons for local assets ...
	I0731 03:57:12.711399    5599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/files for local assets ...
	I0731 03:57:12.711498    5599 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem -> 52232.pem in /etc/ssl/certs
	I0731 03:57:12.711600    5599 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/test/nested/copy/5223/hosts -> hosts in /etc/test/nested/copy/5223
	I0731 03:57:12.711628    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5223
	I0731 03:57:12.714381    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /etc/ssl/certs/52232.pem (1708 bytes)
	I0731 03:57:12.721416    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/test/nested/copy/5223/hosts --> /etc/test/nested/copy/5223/hosts (40 bytes)
	I0731 03:57:12.728186    5599 start.go:303] post-start completed in 54.511584ms
	I0731 03:57:12.728191    5599 fix.go:56] fixHost completed within 641.359375ms
	I0731 03:57:12.728239    5599 main.go:141] libmachine: Using SSH client type: native
	I0731 03:57:12.728474    5599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d3d170] 0x102d3fbd0 <nil>  [] 0s} 192.168.105.14 22 <nil> <nil>}
	I0731 03:57:12.728477    5599 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 03:57:12.789524    5599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1690801032.763194091
	
	I0731 03:57:12.789528    5599 fix.go:206] guest clock: 1690801032.763194091
	I0731 03:57:12.789531    5599 fix.go:219] Guest: 2023-07-31 03:57:12.763194091 -0700 PDT Remote: 2023-07-31 03:57:12.728192 -0700 PDT m=+0.739601209 (delta=35.002091ms)
	I0731 03:57:12.789539    5599 fix.go:190] guest clock delta is within tolerance: 35.002091ms
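
The clock check above parses the VM's `date +%s.%N` output and accepts the drift when the delta against the host clock is within tolerance. A minimal Go sketch reproducing the 35.002091ms delta from the log; the 2s threshold is an assumption, not a value taken from the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestDelta parses "seconds.nanoseconds" and returns guest minus host.
    func guestDelta(out string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 { // assumes 9 fractional digits, as %N prints
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1690801032, 728192000) // the "Remote" timestamp above
    	d, _ := guestDelta("1690801032.763194091", host)
    	if d < 0 {
    		d = -d
    	}
    	fmt.Println(d, d < 2*time.Second) // prints 35.002091ms true
    }
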
	I0731 03:57:12.789542    5599 start.go:83] releasing machines lock for "functional-652000", held for 702.716291ms
	I0731 03:57:12.789835    5599 ssh_runner.go:195] Run: cat /version.json
	I0731 03:57:12.789840    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 03:57:12.789854    5599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 03:57:12.789875    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 03:57:12.824377    5599 ssh_runner.go:195] Run: systemctl --version
	I0731 03:57:12.864527    5599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 03:57:12.866243    5599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 03:57:12.866269    5599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 03:57:12.868797    5599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 03:57:12.868801    5599 start.go:466] detecting cgroup driver to use...
	I0731 03:57:12.868885    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 03:57:12.874473    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 03:57:12.877796    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 03:57:12.880884    5599 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 03:57:12.880905    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 03:57:12.883850    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 03:57:12.887534    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 03:57:12.891130    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 03:57:12.894638    5599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 03:57:12.897837    5599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 03:57:12.900759    5599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 03:57:12.903404    5599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 03:57:12.906906    5599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:57:13.005800    5599 ssh_runner.go:195] Run: sudo systemctl restart containerd
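
The containerd reconfiguration above is a chain of in-place sed edits; the key one flips SystemdCgroup so containerd agrees with the cgroupfs driver chosen for this run. A minimal Go sketch of that single substitution, with a regex mirroring the sed expression in the log:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// A fragment of /etc/containerd/config.toml, for illustration.
    	conf := []byte("[plugins.cri.containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Print(string(re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))))
    }
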
	I0731 03:57:13.013685    5599 start.go:466] detecting cgroup driver to use...
	I0731 03:57:13.013737    5599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 03:57:13.018977    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 03:57:13.023753    5599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 03:57:13.031895    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 03:57:13.037012    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 03:57:13.041666    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 03:57:13.046747    5599 ssh_runner.go:195] Run: which cri-dockerd
	I0731 03:57:13.047987    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 03:57:13.051052    5599 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 03:57:13.056368    5599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 03:57:13.150811    5599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 03:57:13.257459    5599 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 03:57:13.257470    5599 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0731 03:57:13.262751    5599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:57:13.357146    5599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 03:57:24.777162    5599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.420034708s)
	I0731 03:57:24.777227    5599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 03:57:24.862600    5599 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 03:57:24.944598    5599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 03:57:25.027649    5599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:57:25.106951    5599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 03:57:25.114197    5599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 03:57:25.192742    5599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0731 03:57:25.219100    5599 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 03:57:25.219182    5599 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 03:57:25.221319    5599 start.go:534] Will wait 60s for crictl version
	I0731 03:57:25.221357    5599 ssh_runner.go:195] Run: which crictl
	I0731 03:57:25.222779    5599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 03:57:25.234664    5599 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0731 03:57:25.234738    5599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 03:57:25.242395    5599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 03:57:25.254332    5599 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0731 03:57:25.254487    5599 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0731 03:57:25.259266    5599 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0731 03:57:25.260835    5599 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:57:25.260898    5599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 03:57:25.266675    5599 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-652000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0731 03:57:25.266683    5599 docker.go:566] Images already preloaded, skipping extraction
	I0731 03:57:25.266725    5599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 03:57:25.272338    5599 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-652000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0731 03:57:25.272345    5599 cache_images.go:84] Images are preloaded, skipping loading
	I0731 03:57:25.272382    5599 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 03:57:25.279484    5599 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0731 03:57:25.279498    5599 cni.go:84] Creating CNI manager for ""
	I0731 03:57:25.279502    5599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:57:25.279506    5599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 03:57:25.279514    5599 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.14 APIServerPort:8441 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652000 NodeName:functional-652000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 03:57:25.279574    5599 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.14
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-652000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 03:57:25.279603    5599 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-652000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:functional-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
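
The kubeadm YAML and kubelet unit dumped above are rendered from the options struct logged at kubeadm.go:176. A minimal Go sketch of template-based rendering for the first few fields; the template text is illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    type opts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    }

    const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.BindPort}}\n" +
    	"nodeRegistration:\n" +
    	"  name: \"{{.NodeName}}\"\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	t.Execute(os.Stdout, opts{"192.168.105.14", 8441, "functional-652000"})
    }
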
	I0731 03:57:25.279650    5599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 03:57:25.282426    5599 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 03:57:25.282455    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 03:57:25.285527    5599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 03:57:25.290844    5599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 03:57:25.295640    5599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0731 03:57:25.300216    5599 ssh_runner.go:195] Run: grep 192.168.105.14	control-plane.minikube.internal$ /etc/hosts
	I0731 03:57:25.301640    5599 certs.go:56] Setting up /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000 for IP: 192.168.105.14
	I0731 03:57:25.301648    5599 certs.go:190] acquiring lock for shared ca certs: {Name:mk645bb5ce6691935288c693436a38a3c4bde2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:57:25.301775    5599 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key
	I0731 03:57:25.301812    5599 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key
	I0731 03:57:25.301864    5599 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.key
	I0731 03:57:25.301904    5599 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/apiserver.key.4d0fe398
	I0731 03:57:25.301936    5599 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/proxy-client.key
	I0731 03:57:25.302085    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem (1338 bytes)
	W0731 03:57:25.302108    5599 certs.go:433] ignoring /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223_empty.pem, impossibly tiny 0 bytes
	I0731 03:57:25.302113    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 03:57:25.302134    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem (1078 bytes)
	I0731 03:57:25.302151    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem (1123 bytes)
	I0731 03:57:25.302168    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem (1675 bytes)
	I0731 03:57:25.302208    5599 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem (1708 bytes)
	I0731 03:57:25.302520    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 03:57:25.309578    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 03:57:25.316937    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 03:57:25.324322    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 03:57:25.331401    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 03:57:25.338073    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 03:57:25.345458    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 03:57:25.353221    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 03:57:25.360790    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 03:57:25.367998    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem --> /usr/share/ca-certificates/5223.pem (1338 bytes)
	I0731 03:57:25.374649    5599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /usr/share/ca-certificates/52232.pem (1708 bytes)
	I0731 03:57:25.381686    5599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 03:57:25.387046    5599 ssh_runner.go:195] Run: openssl version
	I0731 03:57:25.388818    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/52232.pem && ln -fs /usr/share/ca-certificates/52232.pem /etc/ssl/certs/52232.pem"
	I0731 03:57:25.391851    5599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/52232.pem
	I0731 03:57:25.393341    5599 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 10:55 /usr/share/ca-certificates/52232.pem
	I0731 03:57:25.393358    5599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/52232.pem
	I0731 03:57:25.395196    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/52232.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 03:57:25.398081    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 03:57:25.401767    5599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:57:25.403400    5599 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:57:25.403421    5599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 03:57:25.405411    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 03:57:25.408507    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5223.pem && ln -fs /usr/share/ca-certificates/5223.pem /etc/ssl/certs/5223.pem"
	I0731 03:57:25.411393    5599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5223.pem
	I0731 03:57:25.412914    5599 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 10:55 /usr/share/ca-certificates/5223.pem
	I0731 03:57:25.412931    5599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5223.pem
	I0731 03:57:25.415012    5599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5223.pem /etc/ssl/certs/51391683.0"
	I0731 03:57:25.418030    5599 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 03:57:25.419436    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 03:57:25.421247    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 03:57:25.423230    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 03:57:25.425004    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 03:57:25.426952    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 03:57:25.428716    5599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
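
Each openssl run above (`x509 -noout -checkend 86400`) asks whether a certificate expires within the next 24 hours. A minimal Go sketch of the same check with crypto/x509; the path is taken from the first invocation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
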
	I0731 03:57:25.430658    5599 kubeadm.go:404] StartCluster: {Name:functional-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:57:25.430727    5599 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 03:57:25.439412    5599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 03:57:25.442428    5599 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0731 03:57:25.442431    5599 kubeadm.go:636] restartCluster start
	I0731 03:57:25.442452    5599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 03:57:25.445445    5599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 03:57:25.445728    5599 kubeconfig.go:92] found "functional-652000" server: "https://192.168.105.14:8441"
	I0731 03:57:25.446488    5599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 03:57:25.449620    5599 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.14"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0731 03:57:25.449623    5599 kubeadm.go:1128] stopping kube-system containers ...
	I0731 03:57:25.449657    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 03:57:25.456659    5599 docker.go:462] Stopping containers: [0430a68e15f9 fdc896c688a8 45f684e22cf9 669e0f6bc8c8 54adeabd1c3f ea4f27150ef2 99a8476cf634 245591b784b3 99026461e46e 9ec05d2bb144 148716c0cb70 9ec3620c07d1 4f15529acf87 1298564f14b8 7261b79191b5 372a82f96bcc 77106025cb97 844f5ba3e3b5 f48c8faa509c 4c9b7b6e9558 37870ecd7d5a 81d996eca0be 6a8ad18e2667 d2b57d65118b adc19a95c674 eec468551de6 2effc4b21643 95a9656dd619]
	I0731 03:57:25.456709    5599 ssh_runner.go:195] Run: docker stop 0430a68e15f9 fdc896c688a8 45f684e22cf9 669e0f6bc8c8 54adeabd1c3f ea4f27150ef2 99a8476cf634 245591b784b3 99026461e46e 9ec05d2bb144 148716c0cb70 9ec3620c07d1 4f15529acf87 1298564f14b8 7261b79191b5 372a82f96bcc 77106025cb97 844f5ba3e3b5 f48c8faa509c 4c9b7b6e9558 37870ecd7d5a 81d996eca0be 6a8ad18e2667 d2b57d65118b adc19a95c674 eec468551de6 2effc4b21643 95a9656dd619
	I0731 03:57:25.462671    5599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 03:57:25.556545    5599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 03:57:25.561192    5599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 31 10:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 31 10:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 31 10:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 31 10:56 /etc/kubernetes/scheduler.conf
	
	I0731 03:57:25.561217    5599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0731 03:57:25.565258    5599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0731 03:57:25.569127    5599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0731 03:57:25.572666    5599 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 03:57:25.572687    5599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 03:57:25.575922    5599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0731 03:57:25.578882    5599 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 03:57:25.578907    5599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 03:57:25.581970    5599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 03:57:25.585310    5599 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0731 03:57:25.585313    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 03:57:25.606873    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 03:57:26.172633    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 03:57:26.292527    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 03:57:26.339754    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 03:57:26.363729    5599 api_server.go:52] waiting for apiserver process to appear ...
	I0731 03:57:26.363785    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:57:26.367687    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" repeated at ~500ms intervals from 03:57:26.873196 through 03:58:25.373395 ...]
	I0731 03:58:25.873269    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
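
The run above is minikube polling for the apiserver process roughly every 500ms against the 60s budget announced at api_server.go:52; the poll runs out its budget and the code falls through to log gathering below. A minimal Go sketch of that wait loop; the function name and error text are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or timeout elapses.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when a match exists, non-zero otherwise.
    		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 60*time.Second))
    }
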
	I0731 03:58:26.373480    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:26.400696    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:26.400841    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:26.416208    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:26.416346    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:26.427907    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:26.428001    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:26.452957    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:26.453049    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:26.462281    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:26.462345    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:26.468296    5599 logs.go:284] 2 containers: [af2f059d1432 c8bc3e1d4d6f]
	I0731 03:58:26.468346    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:26.473773    5599 logs.go:284] 0 containers: []
	W0731 03:58:26.473778    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:26.473830    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:26.479146    5599 logs.go:284] 1 containers: [0430a68e15f9]
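
Each diagnostic pass starts by mapping control-plane components to container IDs via `docker ps` name filters; the `k8s_` prefix is the kubelet-assigned container naming convention under the Docker runtime. Two IDs for etcd, kube-scheduler, and kube-controller-manager indicate containers that have been restarted. A sketch of the same enumeration, assuming the Docker runtime inside the guest:

    # List running and exited containers per control-plane component,
    # mirroring the filters used in the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Names}}'
    done
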
	I0731 03:58:26.479156    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:26.479160    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:26.483933    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:26.483936    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:26.493787    5599 logs.go:123] Gathering logs for kube-controller-manager [c8bc3e1d4d6f] ...
	I0731 03:58:26.493792    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8bc3e1d4d6f"
	I0731 03:58:26.500319    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:26.500325    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:26.512750    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:26.512755    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:26.519798    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:26.519803    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:26.527539    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:26.527547    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:26.534338    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:26.534342    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:26.540787    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:26.540792    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:26.570460    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:26.570466    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:26.616577    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:26.616581    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:26.642031    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
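
`kubectl describe nodes` fails with `connection refused` against `localhost:8441`, the apiserver endpoint in the generated kubeconfig, which is consistent with the pgrep probe finding no apiserver process. A hypothetical manual check of that endpoint from inside the guest:

    # Is anything listening on 8441, and does the apiserver answer
    # its health endpoint? (-k: skip TLS verification, -s: silent)
    ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    curl -ks https://localhost:8441/healthz || echo "healthz unreachable"
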
	I0731 03:58:26.642041    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:26.642045    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:26.651565    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:26.651571    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:26.659612    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:26.659618    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:26.680561    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:26.680568    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
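
The container-status step uses a backtick fallback: substitute `crictl` if it resolves on PATH (the `|| echo crictl` keeps the command well-formed either way), and fall back to `docker ps -a` if the first command fails. A simplified, roughly equivalent form of that one-liner:

    # Prefer crictl when installed; otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi
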
	I0731 03:58:29.194304    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:29.211501    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:29.228251    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:29.228355    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:29.243042    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:29.243117    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:29.252672    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:29.252739    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:29.260716    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:29.260774    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:29.268179    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:29.268232    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:29.274715    5599 logs.go:284] 2 containers: [af2f059d1432 c8bc3e1d4d6f]
	I0731 03:58:29.274769    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:29.280508    5599 logs.go:284] 0 containers: []
	W0731 03:58:29.280513    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:29.280556    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:29.286028    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:29.286038    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:29.286041    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:29.295225    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:29.295230    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:29.323224    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:29.323230    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:29.341396    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:29.341402    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:29.350532    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:29.350538    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:29.358524    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:29.358528    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:29.363116    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:29.363121    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:29.390198    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:29.390205    5599 logs.go:123] Gathering logs for kube-controller-manager [c8bc3e1d4d6f] ...
	I0731 03:58:29.390209    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8bc3e1d4d6f"
	I0731 03:58:29.397378    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:29.397383    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:29.403992    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:29.403998    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:29.415789    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:29.415796    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:29.461323    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:29.461328    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:29.486372    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:29.486377    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:29.492812    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:29.492817    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:29.499042    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:29.499047    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:32.007670    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:32.026936    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:32.044637    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:32.044786    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:32.058091    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:32.058203    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:32.068658    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:32.068753    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:32.077880    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:32.077961    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:32.085672    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:32.085746    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:32.092283    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:32.092330    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:32.098025    5599 logs.go:284] 0 containers: []
	W0731 03:58:32.098030    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:32.098074    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:32.103821    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:32.103831    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:32.103835    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:32.116453    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:32.116459    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:32.126582    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:32.126587    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:32.139386    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:32.139392    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:32.166092    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:32.166096    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:32.209564    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:32.209568    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:32.214088    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:32.214091    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:32.223523    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:32.223529    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:32.231261    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:32.231268    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:32.237675    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:32.237680    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:32.264655    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:32.264660    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:32.264664    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:32.271470    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:32.271475    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:32.278154    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:32.278160    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:32.300225    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:32.300232    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:34.814783    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:34.833118    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:34.851288    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:34.851430    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:34.864157    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:34.864261    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:34.875380    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:34.875465    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:34.883782    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:34.883851    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:34.895081    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:34.895155    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:34.901712    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:34.901755    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:34.907514    5599 logs.go:284] 0 containers: []
	W0731 03:58:34.907519    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:34.907551    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:34.914417    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:34.914429    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:34.914433    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:34.960864    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:34.960868    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:34.965828    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:34.965832    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:34.978980    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:34.978989    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:35.003083    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:35.003090    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:35.011089    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:35.011095    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:35.039715    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:35.039719    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:35.039723    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:35.046591    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:35.046596    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:35.055352    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:35.055357    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:35.064634    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:35.064639    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:35.072723    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:35.072727    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:35.083499    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:35.083504    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:35.112441    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:35.112444    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:35.119073    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:35.119080    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:37.632976    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:37.652453    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:37.671099    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:37.671219    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:37.683205    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:37.683300    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:37.694454    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:37.694552    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:37.703073    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:37.703170    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:37.710253    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:37.710297    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:37.717188    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:37.717242    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:37.723234    5599 logs.go:284] 0 containers: []
	W0731 03:58:37.723240    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:37.723287    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:37.729061    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:37.729069    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:37.729072    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:37.738385    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:37.738392    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:37.745015    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:37.745020    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:37.754482    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:37.754489    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:37.760920    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:37.760926    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:37.767539    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:37.767543    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:37.779438    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:37.779447    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:37.792166    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:37.792171    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:37.822320    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:37.822328    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:37.830680    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:37.830685    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:37.835131    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:37.835135    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:37.861763    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:37.861768    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:37.861772    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:37.883972    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:37.883979    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:37.890799    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:37.890804    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:40.436297    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:40.452418    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:40.469166    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:40.469305    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:40.481135    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:40.481232    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:40.491575    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:40.491656    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:40.500431    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:40.500500    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:40.507853    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:40.507905    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:40.518575    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:40.518622    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:40.524520    5599 logs.go:284] 0 containers: []
	W0731 03:58:40.524524    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:40.524563    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:40.529711    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:40.529720    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:40.529723    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:40.539126    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:40.539132    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:40.544198    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:40.544202    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:40.571797    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:40.571802    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:40.571806    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:40.585233    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:40.585238    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:40.594174    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:40.594179    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:40.600841    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:40.600847    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:40.629442    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:40.629446    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:40.640569    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:40.640576    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:40.687365    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:40.687369    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:40.695409    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:40.695416    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:40.702005    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:40.702011    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:40.708489    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:40.708494    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:40.715082    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:40.715086    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:43.240278    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:43.258096    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:43.275310    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:43.275468    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:43.287650    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:43.287773    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:43.303301    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:43.303387    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:43.311680    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:43.311756    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:43.318513    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:43.318560    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:43.324590    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:43.324640    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:43.330380    5599 logs.go:284] 0 containers: []
	W0731 03:58:43.330384    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:43.330426    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:43.335940    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:43.335950    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:43.335954    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:43.343964    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:43.343969    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:43.350561    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:43.350565    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:43.355227    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:43.355230    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:43.382692    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:43.382698    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:43.382702    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:43.406045    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:43.406051    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:43.432499    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:43.432503    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:43.476306    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:43.476312    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:43.485870    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:43.485880    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:43.495185    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:43.495191    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:43.501455    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:43.501461    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:43.513408    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:43.513414    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:43.520025    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:43.520030    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:43.525944    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:43.525950    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:46.039612    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:46.057851    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:46.076541    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:46.076681    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:46.089350    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:46.089452    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:46.099894    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:46.099975    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:46.112290    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:46.112352    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:46.119081    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:46.119138    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:46.125322    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:46.125368    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:46.131050    5599 logs.go:284] 0 containers: []
	W0731 03:58:46.131055    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:46.131092    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:46.136545    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:46.136554    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:46.136557    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:46.147921    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:46.147928    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:46.155103    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:46.155107    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:46.167579    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:46.167584    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:46.176376    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:46.176381    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:46.184536    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:46.184541    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:46.191022    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:46.191026    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:46.234236    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:46.234241    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:46.238868    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:46.238871    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:46.247392    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:46.247396    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:46.276100    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:46.276104    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:46.303636    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:46.303644    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:46.303648    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:46.316722    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:46.316727    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:46.326203    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:46.326209    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:48.855900    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:48.874827    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:48.892144    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:48.892272    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:48.905126    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:48.905230    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:48.915417    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:48.915496    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:48.930999    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:48.931068    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:48.938008    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:48.938064    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:48.943945    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:48.943994    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:48.949396    5599 logs.go:284] 0 containers: []
	W0731 03:58:48.949401    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:48.949465    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:48.954920    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:48.954930    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:48.954934    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:48.963694    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:48.963698    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:48.970187    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:48.970192    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:48.976680    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:48.976685    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:48.985752    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:48.985757    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:48.992578    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:48.992584    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:48.999151    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:48.999157    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:49.028207    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:49.028210    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:49.039954    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:49.039961    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:49.087742    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:49.087748    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:49.092888    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:49.092894    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:49.105339    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:49.105347    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:49.133472    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:49.133480    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:49.133485    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:49.159873    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:49.159880    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:51.670043    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:51.687455    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:51.705806    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:51.705973    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:51.718479    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:51.718584    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:51.728654    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:51.728750    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:51.737669    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:51.737758    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:51.745131    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:51.745184    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:51.751923    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:51.751976    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:51.762312    5599 logs.go:284] 0 containers: []
	W0731 03:58:51.762317    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:51.762358    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:51.767741    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:51.767751    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:51.767754    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:51.814694    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:51.814699    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:51.827395    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:51.827400    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:51.854237    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:51.854243    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:51.854247    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:51.861306    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:51.861311    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:51.884505    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:51.884511    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:51.891002    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:51.891007    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:51.897365    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:51.897369    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:51.908870    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:51.908878    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:51.913994    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:51.913997    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:51.923806    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:51.923812    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:51.931626    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:51.931631    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:51.945255    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:51.945260    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:51.952035    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:51.952039    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:54.483941    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:54.501515    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:54.518612    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:54.518733    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:54.530188    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:54.530285    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:54.539928    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:54.540005    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:54.548936    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:54.549052    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:54.556438    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:54.556491    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:54.563313    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:54.563360    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:54.569065    5599 logs.go:284] 0 containers: []
	W0731 03:58:54.569070    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:54.569110    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:54.574721    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:54.574730    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:54.574733    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:54.581507    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:54.581513    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:54.590993    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:54.591001    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:54.597373    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:54.597377    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:54.620762    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:54.620768    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:54.628820    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:54.628825    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:58:54.640655    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:54.640661    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:54.688071    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:54.688077    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:54.705971    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:54.705978    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:54.718332    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:54.718337    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:54.724683    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:54.724688    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:54.753811    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:54.753816    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:54.779166    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:54.779174    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:54.779179    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:54.785824    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:54.785830    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:57.294600    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:58:57.312415    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:58:57.331710    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:58:57.331820    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:58:57.345527    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:58:57.345634    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:58:57.355988    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:58:57.356071    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:58:57.364606    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:58:57.364673    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:58:57.371653    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:58:57.371715    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:58:57.378806    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:58:57.378849    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:58:57.384580    5599 logs.go:284] 0 containers: []
	W0731 03:58:57.384583    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:58:57.384623    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:58:57.394576    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:58:57.394586    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:58:57.394590    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:58:57.442274    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:58:57.442281    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:58:57.449173    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:58:57.449179    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:58:57.455800    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:58:57.455805    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:58:57.462198    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:58:57.462204    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:58:57.485908    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:58:57.485914    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:58:57.512943    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:58:57.512949    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:58:57.512953    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:58:57.522442    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:58:57.522448    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:58:57.532638    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:58:57.532642    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:58:57.540604    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:58:57.540610    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:58:57.546775    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:58:57.546779    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:58:57.551280    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:58:57.551283    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:58:57.563523    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:58:57.563529    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:58:57.590252    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:58:57.590256    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:00.103624    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:00.122483    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:00.140716    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:00.140883    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:00.153573    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:00.153661    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:00.163034    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:00.163129    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:00.171537    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:00.171617    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:00.178940    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:00.179005    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:00.185625    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:00.185676    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:00.192070    5599 logs.go:284] 0 containers: []
	W0731 03:59:00.192075    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:00.192120    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:00.197693    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:00.197702    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:00.197705    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:00.243973    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:00.243977    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:00.253902    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:00.253907    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:00.262636    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:00.262641    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:00.269270    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:00.269275    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:00.298093    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:00.298097    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:00.309945    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:00.309951    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:00.314596    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:00.314600    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:00.324578    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:00.324583    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:00.331049    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:00.331054    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:00.356735    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:00.356741    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:00.365237    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:00.365242    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:00.391859    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:00.391863    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:00.391867    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:00.404502    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:00.404508    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:02.913489    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:02.931762    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:02.949734    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:02.949884    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:02.962810    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:02.962920    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:02.972905    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:02.973002    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:02.987786    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:02.987855    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:02.994623    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:02.994681    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:03.001113    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:03.001156    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:03.006728    5599 logs.go:284] 0 containers: []
	W0731 03:59:03.006731    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:03.006764    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:03.012188    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:03.012198    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:03.012201    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:03.058939    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:03.058945    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:03.068120    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:03.068125    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:03.074705    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:03.074710    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:03.103793    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:03.103797    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:03.117130    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:03.117135    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:03.123626    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:03.123632    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:03.147303    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:03.147309    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:03.154068    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:03.154075    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:03.161717    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:03.161723    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:03.168173    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:03.168178    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:03.172981    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:03.172985    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:03.199428    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:03.199434    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:03.199439    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:03.209105    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:03.209110    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:05.723134    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:05.740330    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:05.757992    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:05.758130    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:05.770356    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:05.770456    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:05.786827    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:05.786896    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:05.794106    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:05.794162    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:05.800873    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:05.800927    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:05.806904    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:05.806952    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:05.812684    5599 logs.go:284] 0 containers: []
	W0731 03:59:05.812688    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:05.812724    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:05.821008    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:05.821020    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:05.821025    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:05.867975    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:05.867980    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:05.874673    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:05.874678    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:05.886979    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:05.886984    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:05.893240    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:05.893245    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:05.898006    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:05.898011    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:05.910341    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:05.910347    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:05.918122    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:05.918127    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:05.924777    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:05.924782    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:05.951671    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:05.951676    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:05.963501    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:05.963507    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:05.990866    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:05.990871    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:05.990875    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:06.014390    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:06.014396    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:06.023767    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:06.023772    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:08.536904    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:08.555362    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:08.572941    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:08.573096    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:08.585698    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:08.585804    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:08.595804    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:08.595891    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:08.605241    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:08.605327    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:08.612908    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:08.612958    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:08.620076    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:08.620126    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:08.626124    5599 logs.go:284] 0 containers: []
	W0731 03:59:08.626128    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:08.626174    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:08.631601    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:08.631611    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:08.631614    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:08.636621    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:08.636625    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:08.643448    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:08.643452    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:08.670335    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:08.670339    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:08.713840    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:08.713844    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:08.739997    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:08.740002    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:08.740006    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:08.753124    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:08.753131    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:08.764741    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:08.764749    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:08.777301    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:08.777307    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:08.786950    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:08.786956    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:08.810636    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:08.810641    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:08.817165    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:08.817170    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:08.833740    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:08.833745    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:08.840018    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:08.840023    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:11.350115    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:11.368660    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:11.387176    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:11.387329    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:11.399770    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:11.399868    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:11.410176    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:11.410256    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:11.418749    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:11.418841    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:11.426042    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:11.426095    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:11.432653    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:11.432701    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:11.438558    5599 logs.go:284] 0 containers: []
	W0731 03:59:11.438563    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:11.438607    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:11.444283    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:11.444291    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:11.444294    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:11.456866    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:11.456871    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:11.486032    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:11.486038    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:11.532893    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:11.532898    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:11.559101    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:11.559107    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:11.559112    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:11.568688    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:11.568694    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:11.575226    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:11.575232    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:11.580206    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:11.580210    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:11.589687    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:11.589691    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:11.596111    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:11.596116    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:11.624750    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:11.624756    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:11.632639    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:11.632644    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:11.639231    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:11.639234    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:11.646440    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:11.646448    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:14.158288    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:14.176596    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:14.195782    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:14.195927    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:14.209227    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:14.209326    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:14.219580    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:14.219676    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:14.228053    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:14.228123    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:14.235511    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:14.235561    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:14.242227    5599 logs.go:284] 1 containers: [af2f059d1432]
	I0731 03:59:14.242298    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:14.248541    5599 logs.go:284] 0 containers: []
	W0731 03:59:14.248545    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:14.248585    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:14.254294    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:14.254303    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:14.254308    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:14.258863    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:14.258867    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:14.271300    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:14.271306    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:14.279359    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:14.279364    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:14.285527    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:14.285532    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:14.297936    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:14.297941    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:14.304330    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:14.304335    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:14.310889    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:14.310893    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:14.317027    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:14.317032    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:14.362686    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:14.362690    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:14.400392    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:14.400397    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:14.400401    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:14.411255    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:14.411263    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:14.440563    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:14.440571    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:14.471231    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:14.471236    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:17.003173    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:17.021562    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:17.040029    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:17.040175    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:17.053341    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:17.053417    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:17.063408    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:17.063492    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:17.071922    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:17.071989    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:17.083101    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:17.083145    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:17.089621    5599 logs.go:284] 2 containers: [b0481f42e8f4 af2f059d1432]
	I0731 03:59:17.089678    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:17.095584    5599 logs.go:284] 0 containers: []
	W0731 03:59:17.095588    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:17.095631    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:17.100912    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:17.100920    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:17.100924    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:17.110158    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:17.110164    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:17.133480    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:17.133485    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:17.141316    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:17.141321    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:17.147626    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:17.147631    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:17.160262    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:17.160268    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:17.169950    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:17.169956    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:17.176529    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:17.176533    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:17.222610    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:17.222616    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:17.249149    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:17.249156    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:17.249160    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:17.253932    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:17.253935    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:17.261020    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:17.261025    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:17.267463    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:17.267470    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:17.276212    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:17.276217    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:17.303188    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:17.303194    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:19.817455    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:19.836265    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:19.853893    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:19.854010    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:19.868781    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:19.868888    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:19.879194    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:19.879276    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:19.887344    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:19.887438    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:19.894438    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:19.894490    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:19.900967    5599 logs.go:284] 2 containers: [b0481f42e8f4 af2f059d1432]
	I0731 03:59:19.901018    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:19.907171    5599 logs.go:284] 0 containers: []
	W0731 03:59:19.907175    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:19.907214    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:19.912602    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:19.912612    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:19.912616    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:19.919033    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:19.919038    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:19.925401    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:19.925407    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:19.970782    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:19.970786    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:19.979089    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:19.979095    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:19.985618    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:19.985623    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:19.992402    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:19.992407    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:20.019247    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:20.019251    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:20.030771    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:20.030782    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:20.040047    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:20.040052    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:20.049606    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:20.049611    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:20.056055    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:20.056060    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:20.079526    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:20.079532    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:20.084966    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:20.084970    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:20.115532    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:20.115538    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:20.115542    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:22.631262    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:22.650140    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:22.668003    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:22.668133    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:22.680918    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:22.681038    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:22.694682    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:22.694757    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:22.703167    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:22.703240    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:22.710141    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:22.710194    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:22.716756    5599 logs.go:284] 2 containers: [b0481f42e8f4 af2f059d1432]
	I0731 03:59:22.716801    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:22.722490    5599 logs.go:284] 0 containers: []
	W0731 03:59:22.722495    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:22.722541    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:22.728242    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:22.728251    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:22.728255    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:22.757842    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:22.757847    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:22.767385    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:22.767389    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:22.785972    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:22.785978    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:22.792374    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:22.792380    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:22.805137    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:22.805141    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:22.839094    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:22.839100    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:22.849859    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:22.849865    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:22.856518    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:22.856524    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:22.863012    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:22.863018    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:22.868040    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:22.868044    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:22.896600    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:22.896604    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:22.896608    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:22.912471    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:22.912476    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:22.918702    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:22.918707    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:22.931007    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:22.931015    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:25.482866    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:25.500912    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:25.519447    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:25.519565    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:25.538352    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:25.538463    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:25.547462    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:25.547523    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:25.561872    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:25.561936    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:25.568372    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:25.568418    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:25.581878    5599 logs.go:284] 2 containers: [b0481f42e8f4 af2f059d1432]
	I0731 03:59:25.581926    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:25.587198    5599 logs.go:284] 0 containers: []
	W0731 03:59:25.587203    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:25.587245    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:25.592944    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:25.592955    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:25.592958    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:25.641132    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:25.641136    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:25.668605    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:25.668610    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:25.668613    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:25.678763    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:25.678769    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:25.686748    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:25.686753    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:25.692906    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:25.692911    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:25.705247    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:25.705252    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:25.711590    5599 logs.go:123] Gathering logs for kube-controller-manager [af2f059d1432] ...
	I0731 03:59:25.711595    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af2f059d1432"
	I0731 03:59:25.722758    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:25.722763    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:25.749592    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:25.749595    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:25.758983    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:25.758989    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:25.784661    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:25.784666    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:25.792152    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:25.792157    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:25.798681    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:25.798686    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:25.810355    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:25.810362    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
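
[Editor's note] The block above is one full iteration of minikube's log-gathering pass: it enumerates each control-plane component's containers by the k8s_<name> prefix, tails the last 400 lines from every match, and rounds the pass out with kubelet and docker units via journalctl, dmesg, and an overall container status. A minimal bash sketch of the enumeration-and-tail step, reconstructed from the exact commands in the log (the component names, filters, and tail length are taken verbatim; run inside the guest):

    #!/usr/bin/env bash
    # Enumerate each control-plane component's containers by k8s_<name> prefix,
    # then tail the last 400 log lines from each match, as the log above does.
    set -euo pipefail

    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet storage-provisioner; do
      # Same filter/format the log shows: match on container name, print only IDs.
      ids=$(docker ps -a --filter="name=k8s_${component}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${component}\"" >&2
        continue
      fi
      for id in $ids; do
        echo "=== ${component} [${id}] ==="
        docker logs --tail 400 "$id"
      done
    done
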
	I0731 03:59:28.317066    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:28.335413    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:28.353263    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:28.353402    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:28.366123    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:28.366212    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:28.376373    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:28.376465    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:28.385160    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:28.385249    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:28.392604    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:28.392655    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:28.399463    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:28.399517    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:28.409314    5599 logs.go:284] 0 containers: []
	W0731 03:59:28.409319    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:28.409362    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:28.414598    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:28.414608    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:28.414611    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:28.463408    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:28.463412    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:28.468488    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:28.468491    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:28.476274    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:28.476279    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:28.503158    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:28.503162    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:28.510055    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:28.510061    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:28.516803    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:28.516808    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:28.543338    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:28.543351    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:28.543356    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:28.558743    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:28.558749    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:28.583676    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:28.583682    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:28.590463    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:28.590469    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:28.596643    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:28.596650    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:28.608033    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:28.608040    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:28.617225    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:28.617231    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:31.129062    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:31.147189    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:31.164064    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:31.164220    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:31.176585    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:31.176699    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:31.187026    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:31.187113    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:31.195469    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:31.195533    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:31.203021    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:31.203077    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:31.209486    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:31.209530    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:31.215682    5599 logs.go:284] 0 containers: []
	W0731 03:59:31.215687    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:31.215726    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:31.221421    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:31.221430    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:31.221433    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:31.260663    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:31.260669    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:31.260673    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:31.278656    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:31.278661    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:31.288323    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:31.288329    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:31.314001    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:31.314007    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:31.364145    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:31.364159    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:31.372510    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:31.372515    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:31.378872    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:31.378878    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:31.390689    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:31.390698    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:31.397305    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:31.397309    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:31.403583    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:31.403588    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:31.412181    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:31.412186    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:31.418698    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:31.418702    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:31.444748    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:31.444751    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:33.951816    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:33.970135    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:33.988321    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:33.988434    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:34.002101    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:34.002184    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:34.011717    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:34.011806    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:34.020507    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:34.020575    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:34.027871    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:34.027922    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:34.041272    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:34.041334    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:34.046964    5599 logs.go:284] 0 containers: []
	W0731 03:59:34.046969    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:34.047014    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:34.052648    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:34.052658    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:34.052661    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:34.057420    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:34.057424    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:34.063905    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:34.063911    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:34.070114    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:34.070120    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:34.085217    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:34.085222    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:34.094891    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:34.094897    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:34.119441    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:34.119446    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:34.169026    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:34.169030    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:34.182609    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:34.182615    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:34.189021    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:34.189026    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:34.214752    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:34.214758    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:34.214763    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:34.223601    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:34.223607    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:34.231703    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:34.231709    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:34.260697    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:34.260701    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
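
[Editor's note] The "container status" step in each cycle relies on a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. If crictl is not on PATH, which fails, the echoed literal "crictl" then fails under sudo, and the || falls through to docker ps -a. The same intent, expanded for readability:

    # Prefer crictl when it's available; otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi
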
	I0731 03:59:36.774751    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:36.792468    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:36.811004    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:36.811158    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:36.823747    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:36.823844    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:36.834144    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:36.834229    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:36.846307    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:36.846373    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:36.853508    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:36.853561    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:36.860124    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:36.860193    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:36.866071    5599 logs.go:284] 0 containers: []
	W0731 03:59:36.866075    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:36.866112    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:36.871527    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:36.871536    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:36.871539    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:36.917751    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:36.917756    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:36.927586    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:36.927591    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:36.934199    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:36.934204    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:36.962832    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:36.962836    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:36.979602    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:36.979607    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:36.988688    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:36.988693    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:36.996816    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:36.996821    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:37.003360    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:37.003364    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:37.015220    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:37.015226    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:37.022283    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:37.022288    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:37.027680    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:37.027685    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:37.055054    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:37.055059    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:37.055064    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:37.080537    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:37.080546    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
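
[Editor's note] The timestamps show the outer wait loop re-probing for a kube-apiserver process roughly every 2.5-3 seconds (03:59:25.5, :28.3, :31.1, :33.9, :36.7, ...), gathering the full log set on each miss. A hedged sketch of that loop; the pgrep pattern is copied from the log, while gather_logs is a hypothetical stand-in for the enumeration routine sketched earlier:

    # Retry until kube-apiserver shows up in the process table.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      gather_logs           # hypothetical: the per-component log pass above
      sleep 2.5             # assumed cadence, inferred from the timestamps
    done
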
	I0731 03:59:39.591065    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:39.609310    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:39.627206    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:39.627373    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:39.640037    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:39.640136    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:39.650370    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:39.650459    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:39.658890    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:39.658959    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:39.666183    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:39.666239    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:39.672862    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:39.672907    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:39.679096    5599 logs.go:284] 0 containers: []
	W0731 03:59:39.679101    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:39.679149    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:39.685167    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:39.685177    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:39.685180    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:39.731258    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:39.731263    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:39.743105    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:39.743112    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:39.749559    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:39.749565    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:39.754383    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:39.754387    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:39.764039    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:39.764044    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:39.770641    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:39.770646    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:39.778437    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:39.778443    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:39.784854    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:39.784861    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:39.793754    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:39.793758    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:39.800477    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:39.800483    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:39.830346    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:39.830350    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:39.842748    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:39.842755    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:39.870194    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:39.870199    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:39.870203    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:42.398300    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:42.415945    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:42.434492    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:42.434629    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:42.446860    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:42.446950    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:42.456839    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:42.456946    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:42.465411    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:42.465483    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:42.472896    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:42.472953    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:42.479176    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:42.479222    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:42.485058    5599 logs.go:284] 0 containers: []
	W0731 03:59:42.485063    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:42.485111    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:42.491097    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:42.491106    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:42.491109    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:42.500706    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:42.500712    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:42.507312    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:42.507316    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:42.513832    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:42.513837    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:42.540468    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:42.540474    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:42.548421    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:42.548426    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:42.554680    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:42.554685    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:42.567508    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:42.567513    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:42.579350    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:42.579356    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:42.628908    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:42.628912    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:42.633445    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:42.633448    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:42.659094    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:42.659099    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:42.659103    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:42.668688    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:42.668692    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:42.675312    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:42.675318    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:45.207043    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:45.220880    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:45.237425    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:45.237539    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:45.248683    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:45.248774    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:45.260120    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:45.260211    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:45.268342    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:45.268412    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:45.281614    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:45.281662    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:45.287487    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:45.287533    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:45.292919    5599 logs.go:284] 0 containers: []
	W0731 03:59:45.292923    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:45.292960    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:45.298106    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:45.298117    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:45.298121    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:45.349783    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:45.349789    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:45.364830    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:45.364837    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:45.373062    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:45.373068    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:45.379335    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:45.379340    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:45.390816    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:45.390823    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:45.418244    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:45.418250    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:45.418254    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:45.430549    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:45.430554    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:45.437969    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:45.437975    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:45.466708    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:45.466712    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:45.474200    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:45.474205    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:45.478658    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:45.478661    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:45.490729    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:45.490736    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:45.516212    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:45.516218    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:48.025052    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:48.044887    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:48.063262    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:48.063434    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:48.077087    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:48.077189    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:48.093976    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:48.094055    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:48.101957    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:48.102014    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:48.108674    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:48.108718    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:48.114949    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:48.114990    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:48.120668    5599 logs.go:284] 0 containers: []
	W0731 03:59:48.120673    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:48.120717    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:48.126002    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:48.126013    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:48.126016    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:48.137844    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:48.137852    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:48.151629    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:48.151635    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:48.177691    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:48.177697    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:48.189998    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:48.190005    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:48.236600    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:48.236603    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:48.263035    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:48.263040    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:48.263044    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:48.269850    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:48.269856    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:48.274647    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:48.274650    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:48.281295    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:48.281299    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:48.287916    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:48.287922    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:48.314851    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:48.314855    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:48.321509    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:48.321516    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:48.331734    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:48.331739    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:50.843099    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:50.862259    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:50.880296    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:50.880458    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:50.893188    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:50.893285    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:50.903446    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:50.903518    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:50.917793    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:50.917850    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:50.924998    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:50.925057    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:50.931694    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:50.931741    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:50.937637    5599 logs.go:284] 0 containers: []
	W0731 03:59:50.937641    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:50.937678    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:50.942821    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:50.942832    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:50.942836    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:50.969473    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:50.969478    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:50.981787    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:50.981795    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:50.986720    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:50.986724    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:50.996182    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:50.996187    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:51.002577    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:51.002581    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:51.009055    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:51.009061    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:51.037885    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:51.037892    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:51.088083    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:51.088087    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:51.116438    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:51.116444    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:51.116448    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:51.128842    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:51.128846    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:51.142625    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:51.142631    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:51.154298    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:51.154303    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:51.163018    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:51.163025    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:53.672484    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:53.690585    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:53.707944    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:53.708102    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:53.721199    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:53.721293    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:53.730744    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:53.730828    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:53.741727    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:53.741802    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:53.749025    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:53.749071    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:53.755706    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:53.755759    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:53.761833    5599 logs.go:284] 0 containers: []
	W0731 03:59:53.761838    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:53.761883    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:53.773436    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:53.773447    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:53.773455    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:53.823131    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:53.823136    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:53.848873    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:53.848878    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:53.848883    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:53.858966    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:53.858972    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:53.888210    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:53.888215    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:53.894739    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:53.894744    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:53.920733    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:53.920738    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:53.927530    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:53.927535    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:53.938866    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:53.938874    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:53.956561    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:53.956566    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:53.964523    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:53.964528    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:53.975053    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:53.975058    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:53.980309    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:53.980312    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:53.989626    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:53.989632    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:56.515432    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:56.533656    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:56.551807    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:56.551963    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:56.566724    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:56.566808    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:56.582094    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:56.582169    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:56.590123    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:56.590190    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:56.599311    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:56.599360    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:56.605473    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:56.605528    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:56.611116    5599 logs.go:284] 0 containers: []
	W0731 03:59:56.611121    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:56.611162    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:56.616429    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:56.616438    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:56.616440    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:56.666453    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:56.666457    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:56.671855    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:56.671858    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:56.698523    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:56.698528    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:56.698533    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:56.714749    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:56.714753    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:56.721944    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:56.721949    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:56.729620    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:56.729626    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:56.741193    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:56.741200    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:56.754168    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:56.754173    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:56.782447    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:56.782453    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:56.789624    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:56.789630    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:56.795903    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:56.795908    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:56.824831    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:56.824836    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 03:59:56.833915    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:56.833920    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:59.341551    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 03:59:59.360069    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 03:59:59.380894    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 03:59:59.381022    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 03:59:59.393141    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 03:59:59.393229    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 03:59:59.402880    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 03:59:59.402940    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 03:59:59.411295    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 03:59:59.411341    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 03:59:59.418637    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 03:59:59.418693    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 03:59:59.427805    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 03:59:59.427863    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 03:59:59.433532    5599 logs.go:284] 0 containers: []
	W0731 03:59:59.433537    5599 logs.go:286] No container was found matching "kindnet"
	I0731 03:59:59.433579    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 03:59:59.438932    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 03:59:59.438940    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 03:59:59.438943    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 03:59:59.444084    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 03:59:59.444087    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 03:59:59.450504    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 03:59:59.450511    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 03:59:59.460062    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 03:59:59.460068    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 03:59:59.486382    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 03:59:59.486387    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 03:59:59.493061    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 03:59:59.493066    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 03:59:59.499727    5599 logs.go:123] Gathering logs for Docker ...
	I0731 03:59:59.499731    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 03:59:59.528215    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 03:59:59.528218    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 03:59:59.577370    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 03:59:59.577375    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 03:59:59.585684    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 03:59:59.585690    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 03:59:59.591869    5599 logs.go:123] Gathering logs for container status ...
	I0731 03:59:59.591873    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 03:59:59.603016    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 03:59:59.603021    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 03:59:59.629629    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 03:59:59.629638    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 03:59:59.629642    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 03:59:59.643206    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 03:59:59.643214    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:02.154970    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:02.173901    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:02.191282    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:02.191399    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:02.202532    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:02.202614    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:02.212416    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:02.212490    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:02.225416    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:02.225484    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:02.231924    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:02.231977    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:02.242127    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:02.242182    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:02.247555    5599 logs.go:284] 0 containers: []
	W0731 04:00:02.247559    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:02.247594    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:02.253246    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:02.253256    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:02.253259    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:02.259545    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:02.259549    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:02.272073    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:02.272078    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:02.297605    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:02.297610    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:02.297615    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:02.306982    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:02.306988    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:02.316383    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:02.316387    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:02.322629    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:02.322633    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:02.353836    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:02.353842    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:02.360906    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:02.360911    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:02.387641    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:02.387646    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:02.434377    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:02.434381    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:02.445557    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:02.445564    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:02.453594    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:02.453600    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:02.460009    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:02.460014    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:04.967136    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:04.984923    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:05.002312    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:05.002448    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:05.015442    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:05.015543    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:05.025524    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:05.025601    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:05.033997    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:05.034072    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:05.041388    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:05.041438    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:05.048089    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:05.048133    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:05.054335    5599 logs.go:284] 0 containers: []
	W0731 04:00:05.054340    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:05.054388    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:05.060478    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:05.060488    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:05.060492    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:05.072358    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:05.072363    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:05.109104    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:05.109109    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:05.135608    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:05.135612    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:05.161954    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:05.161961    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:05.161965    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:05.168748    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:05.168753    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:05.174860    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:05.174864    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:05.183573    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:05.183578    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:05.196705    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:05.196711    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:05.203399    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:05.203409    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:05.214951    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:05.214958    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:05.262430    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:05.262435    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:05.267274    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:05.267278    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:05.283057    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:05.283062    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:07.792001    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:07.810545    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:07.828880    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:07.829027    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:07.842150    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:07.842251    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:07.852813    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:07.852892    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:07.865266    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:07.865337    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:07.872352    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:07.872408    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:07.878784    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:07.878830    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:07.884539    5599 logs.go:284] 0 containers: []
	W0731 04:00:07.884542    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:07.884578    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:07.890224    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:07.890233    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:07.890236    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:07.905313    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:07.905321    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:07.914420    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:07.914426    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:07.921236    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:07.921241    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:07.948067    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:07.948073    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:07.948077    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:07.955119    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:07.955126    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:07.963078    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:07.963084    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:07.989921    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:07.989926    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:07.999437    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:07.999442    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:08.004356    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:08.004360    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:08.038150    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:08.038156    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:08.088403    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:08.088408    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:08.095453    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:08.095458    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:08.106877    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:08.106884    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:10.615685    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:10.634800    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:10.652026    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:10.652188    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:10.665049    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:10.665135    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:10.675876    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:10.675976    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:10.685002    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:10.685065    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:10.692932    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:10.692987    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:10.699339    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:10.699388    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:10.705198    5599 logs.go:284] 0 containers: []
	W0731 04:00:10.705203    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:10.705248    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:10.710894    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:10.710904    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:10.710907    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:10.760777    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:10.760781    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:10.767529    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:10.767534    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:10.777713    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:10.777717    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:10.784287    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:10.784293    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:10.811115    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:10.811120    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:10.816242    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:10.816245    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:10.825423    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:10.825429    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:10.837204    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:10.837211    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:10.846704    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:10.846709    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:10.874313    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:10.874319    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:10.882738    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:10.882744    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:10.908755    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:10.908762    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:10.908766    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:10.921336    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:10.921342    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:13.432113    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:13.449645    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:13.468158    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:13.468298    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:13.483071    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:13.483183    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:13.493329    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:13.493408    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:13.501576    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:13.501632    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:13.508253    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:13.508299    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:13.515019    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:13.515060    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:13.520754    5599 logs.go:284] 0 containers: []
	W0731 04:00:13.520759    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:13.520805    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:13.526458    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:13.526467    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:13.526471    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:13.552496    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:13.552501    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:13.552505    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:13.565389    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:13.565394    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:13.574792    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:13.574796    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:13.581949    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:13.581955    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:13.609646    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:13.609652    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:13.617749    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:13.617754    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:13.644099    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:13.644105    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:13.655790    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:13.655797    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:13.660579    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:13.660582    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:13.710588    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:13.710592    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:13.720184    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:13.720190    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:13.733942    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:13.733947    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:13.740300    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:13.740305    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:16.249294    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:16.268191    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:16.286562    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:16.286716    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:16.299708    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:16.299791    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:16.314005    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:16.314087    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:16.322648    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:16.322743    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:16.329595    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:16.329661    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:16.341672    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:16.341737    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:16.347582    5599 logs.go:284] 0 containers: []
	W0731 04:00:16.347587    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:16.347629    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:16.353073    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:16.353081    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:16.353084    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:16.400074    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:16.400078    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:16.404774    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:16.404777    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:16.413784    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:16.413790    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:16.440166    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:16.440170    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:16.461064    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:16.461069    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:16.489911    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:16.489917    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:16.497374    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:16.497381    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:16.509983    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:16.509991    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:16.519318    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:16.519325    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:16.527006    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:16.527015    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:16.533252    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:16.533256    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:16.560132    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:16.560137    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:16.560141    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:16.566543    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:16.566549    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:19.075722    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:19.095209    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:19.113525    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:19.113688    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:19.127491    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:19.127586    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:19.138214    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:19.138298    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:19.146989    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:19.147054    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:19.154474    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:19.154531    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:19.160833    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:19.160888    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:19.170569    5599 logs.go:284] 0 containers: []
	W0731 04:00:19.170573    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:19.170619    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:19.176023    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:19.176033    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:19.176036    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:19.223304    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:19.223309    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:19.228337    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:19.228340    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:19.256461    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:19.256466    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:19.256470    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:19.269558    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:19.269563    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:19.278570    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:19.278576    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:19.306287    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:19.306293    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:19.312504    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:19.312510    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:19.326844    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:19.326849    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:19.333270    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:19.333276    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:19.342834    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:19.342840    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:19.349113    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:19.349118    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:19.355684    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:19.355688    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:19.382437    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:19.382440    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:21.896789    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:21.914797    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:21.931697    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:21.931862    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:21.945575    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:21.945676    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:21.960190    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:21.960279    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:21.968797    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:21.968873    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:21.976289    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:21.976340    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:21.982851    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:21.982902    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:21.988421    5599 logs.go:284] 0 containers: []
	W0731 04:00:21.988428    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:21.988485    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:21.993894    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:21.993902    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:21.993906    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:21.998370    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:21.998372    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:22.030021    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:22.030027    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:22.038386    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:22.038392    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:22.044708    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:22.044713    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:22.071208    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:22.071214    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:22.071218    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:22.084241    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:22.084247    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:22.093505    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:22.093509    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:22.099829    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:22.099835    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:22.106305    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:22.106310    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:22.115567    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:22.115572    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:22.122013    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:22.122018    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:22.148124    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:22.148129    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:22.194943    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:22.194947    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:24.709571    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:24.728989    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:24.746213    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:24.746368    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:24.759683    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:24.759786    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:24.770799    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:24.770884    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:24.779822    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:24.779892    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:24.787212    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:24.787260    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:24.793991    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:24.794043    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:24.802968    5599 logs.go:284] 0 containers: []
	W0731 04:00:24.802973    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:24.803010    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:24.810876    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:24.810884    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:24.810887    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:24.838487    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:24.838492    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:24.847802    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:24.847809    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:24.854342    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:24.854347    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:24.880641    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:24.880646    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:24.880650    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:24.909435    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:24.909442    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:24.915991    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:24.916000    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:24.927477    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:24.927485    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:24.978061    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:24.978065    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:24.982513    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:24.982516    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:24.990598    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:24.990602    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:24.996649    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:24.996653    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:25.009049    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:25.009054    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:25.018283    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:25.018287    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:27.527242    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:27.545683    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:27.564731    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:27.564882    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:27.577904    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:27.578025    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:27.594746    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:27.594830    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:27.602525    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:27.602595    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:27.609711    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:27.609764    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:27.617712    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:27.617770    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:27.623316    5599 logs.go:284] 0 containers: []
	W0731 04:00:27.623321    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:27.623365    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:27.628816    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:27.628826    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:27.628829    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:27.633541    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:27.633544    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:27.662138    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:27.662144    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:27.668964    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:27.668970    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:27.696107    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:27.696114    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:27.696118    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:27.703179    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:27.703185    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:27.732006    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:27.732011    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:27.739075    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:27.739080    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:27.751441    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:27.751448    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:27.798638    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:27.798642    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:27.810827    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:27.810831    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:27.819792    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:27.819797    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:27.829337    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:27.829343    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:27.838873    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:27.838878    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
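	(The cycle above shows the per-component container discovery: one docker ps -a per control-plane component, filtered by the kubeadm naming convention k8s_<component>, followed by a log-gathering pass over each ID found. A minimal sketch of that discovery step, assuming Docker is reachable locally — minikube runs the same commands over SSH inside the guest, and this is not its actual code:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containersFor lists container IDs whose names match the kubeadm
	    // convention k8s_<component>, mirroring the docker ps lines above.
	    func containersFor(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	components := []string{"kube-apiserver", "etcd", "coredns",
	    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	    		"kindnet", "storage-provisioner"}
	    	for _, c := range components {
	    		ids, err := containersFor(c)
	    		if err != nil {
	    			fmt.Printf("%s: %v\n", c, err)
	    			continue
	    		}
	    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	    	}
	    }

	The empty result for "kindnet" above is expected here: the component is simply not deployed, hence the W-level "No container was found matching" line.)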
	I0731 04:00:30.348975    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:30.367744    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:30.384838    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:30.384985    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:30.402195    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:30.402308    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:30.412287    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:30.412372    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:30.420616    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:30.420681    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:30.428123    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:30.428172    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:30.438610    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:30.438663    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:30.444076    5599 logs.go:284] 0 containers: []
	W0731 04:00:30.444080    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:30.444117    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:30.454369    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:30.454380    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:30.454384    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:30.462100    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:30.462106    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:30.468567    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:30.468571    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:30.518986    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:30.518991    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:30.531671    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:30.531675    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:30.540446    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:30.540452    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:30.546827    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:30.546832    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:30.579635    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:30.579640    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:30.586244    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:30.586249    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:30.612316    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:30.612322    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:30.618882    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:30.618888    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:30.623213    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:30.623216    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:30.651391    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:30.651397    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:30.651400    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:30.661001    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:30.661006    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
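	(Between cycles the tool sleeps roughly 2.5 s, re-checks for a running kube-apiserver with pgrep, and repeats the gathering pass; every kubectl describe nodes attempt fails because nothing is listening on localhost:8441. A minimal sketch of that wait-and-retry probe — the address and cadence are read off the timestamps above, and this is an illustration, not minikube's implementation:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    // apiserverUp reports whether anything accepts TCP connections on
	    // addr, which is what the refused kubectl calls above are probing.
	    func apiserverUp(addr string) bool {
	    	conn, err := net.DialTimeout("tcp", addr, time.Second)
	    	if err != nil {
	    		return false
	    	}
	    	conn.Close()
	    	return true
	    }

	    func main() {
	    	const addr = "localhost:8441" // port from the refused connections
	    	for attempt := 1; attempt <= 10; attempt++ {
	    		if apiserverUp(addr) {
	    			fmt.Println("apiserver reachable")
	    			return
	    		}
	    		fmt.Printf("attempt %d: connection refused, retrying\n", attempt)
	    		time.Sleep(2500 * time.Millisecond) // cadence seen above
	    	}
	    	fmt.Println("gave up waiting for kube-apiserver")
	    })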
	I0731 04:00:33.174734    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:33.194461    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:33.212084    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:33.212257    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:33.224547    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:33.224648    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:33.234850    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:33.234938    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:33.243837    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:33.243900    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:33.251608    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:33.251665    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:33.258459    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:33.258507    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:33.264522    5599 logs.go:284] 0 containers: []
	W0731 04:00:33.264526    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:33.264567    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:33.270173    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:33.270184    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:33.270187    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:33.276676    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:33.276681    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:33.288300    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:33.288306    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:33.295365    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:33.295370    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:33.307507    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:33.307512    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:33.316424    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:33.316430    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:33.330317    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:33.330322    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:33.336788    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:33.336792    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:33.364005    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:33.364009    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:33.411383    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:33.411390    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:33.419558    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:33.419563    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:33.425928    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:33.425934    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:33.454205    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:33.454209    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:33.479602    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:33.479608    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:33.479612    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:35.986957    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:36.005693    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:36.024550    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:36.024697    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:36.037507    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:36.037584    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:36.047422    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:36.047504    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:36.056255    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:36.056338    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:36.063752    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:36.063806    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:36.070378    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:36.070433    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:36.076484    5599 logs.go:284] 0 containers: []
	W0731 04:00:36.076488    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:36.076528    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:36.082268    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:36.082279    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:36.082282    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:36.090391    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:36.090397    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:36.097144    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:36.097148    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:36.113254    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:36.113262    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:36.142614    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:36.142621    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:36.152337    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:36.152342    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:36.162334    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:36.162340    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:36.168810    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:36.168815    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:36.195647    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:36.195654    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:36.195657    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:36.224829    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:36.224833    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:36.229822    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:36.229825    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:36.243887    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:36.243893    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:36.250602    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:36.250607    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:36.257400    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:36.257406    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:38.810671    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:38.830159    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:38.849394    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:38.849534    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:38.862668    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:38.862756    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:38.872265    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:38.872342    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:38.881126    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:38.881203    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:38.889084    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:38.889136    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:38.895736    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:38.895790    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:38.901894    5599 logs.go:284] 0 containers: []
	W0731 04:00:38.901898    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:38.901940    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:38.907466    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:38.907477    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:38.907481    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:38.912584    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:38.912588    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:38.919352    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:38.919358    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:38.947935    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:38.947941    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:38.977166    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:38.977170    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:39.004208    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:39.004212    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:39.004215    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:39.013802    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:39.013808    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:39.064079    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:39.064084    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:39.072044    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:39.072050    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:39.078676    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:39.078682    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:39.085326    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:39.085332    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:39.097544    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:39.097549    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:39.107023    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:39.107028    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:39.113521    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:39.113526    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:41.626883    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:41.644900    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:41.662989    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:41.663113    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:41.676133    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:41.676243    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:41.687481    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:41.687582    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:41.696447    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:41.696500    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:41.704168    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:41.704218    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:41.710863    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:41.710916    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:41.716634    5599 logs.go:284] 0 containers: []
	W0731 04:00:41.716639    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:41.716682    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:41.722078    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:41.722089    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:41.722093    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:41.726583    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:41.726587    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:41.737505    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:41.737511    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:41.789015    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:41.789021    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:41.816369    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:41.816373    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:41.816377    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:41.826096    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:41.826103    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:41.855446    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:41.855455    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:41.861968    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:41.861973    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:41.890403    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:41.890411    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:41.897263    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:41.897268    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:41.903963    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:41.903968    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:41.916145    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:41.916150    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:41.925080    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:41.925084    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:41.931562    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:41.931567    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:44.441321    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:44.451272    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:44.465087    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:44.465161    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:44.472242    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:44.472306    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:44.478506    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:44.478577    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:44.484724    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:44.484800    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:44.491927    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:44.491995    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:44.498130    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:44.498193    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:44.504349    5599 logs.go:284] 0 containers: []
	W0731 04:00:44.504356    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:44.504418    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:44.511361    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:44.511374    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:44.511378    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:44.518413    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:44.518421    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:44.525650    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:44.525659    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:44.540117    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:44.540126    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:44.552788    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:44.552796    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:44.566542    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:44.566550    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:44.596076    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:44.596082    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:44.604280    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:44.604286    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:44.630223    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:44.630228    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:44.630233    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:44.639499    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:44.639508    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:44.666295    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:44.666301    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:44.673282    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:44.673288    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:44.724924    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:44.724928    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:44.730033    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:44.730036    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:47.242048    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:47.260721    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:47.284308    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:47.284435    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:47.296838    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:47.296928    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:47.306271    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:47.306342    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:47.315126    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:47.315177    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:47.322103    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:47.322159    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:47.328501    5599 logs.go:284] 1 containers: [b0481f42e8f4]
	I0731 04:00:47.328553    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:47.334068    5599 logs.go:284] 0 containers: []
	W0731 04:00:47.334080    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:47.334126    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:47.339526    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:47.339537    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:47.339541    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:47.366694    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:47.366700    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:47.366704    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:47.380213    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:47.380219    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:47.390092    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:47.390097    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:47.398307    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:47.398311    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:47.404827    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:47.404832    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:47.456278    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:47.456282    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:47.463040    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:47.463046    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:47.467515    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:47.467518    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:47.474508    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:47.474514    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:47.502939    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:47.502943    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:47.514063    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:47.514069    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:47.543306    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:47.543311    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:47.550006    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:47.550012    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:50.061481    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:50.068018    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:50.075499    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:50.075561    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:50.082396    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:50.082450    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:50.088379    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:50.088428    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:50.094203    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:50.094251    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:50.099419    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:50.099463    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:50.111531    5599 logs.go:284] 2 containers: [1e95c05b9ec9 b0481f42e8f4]
	I0731 04:00:50.111579    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:50.116602    5599 logs.go:284] 0 containers: []
	W0731 04:00:50.116606    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:50.116644    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:50.121749    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:50.121760    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:50.121763    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:50.170631    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:50.170636    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:50.181390    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:50.181396    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:50.208082    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:50.208086    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:50.219923    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:50.219928    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:50.227574    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:50.227578    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:50.233911    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:00:50.233916    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:00:50.239767    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:50.239773    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:50.245983    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:50.245988    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:50.257165    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:50.257171    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:50.262323    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:50.262326    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:50.271396    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:50.271402    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:50.280785    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:50.280790    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:50.309953    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:50.309959    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:50.336865    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:50.336871    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:50.336875    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
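	(From 04:00:50 the kube-controller-manager listing returns two IDs, [1e95c05b9ec9 b0481f42e8f4], consistent with a restarted controller-manager while the apiserver stays unreachable; the gathering step then tails each ID with docker logs --tail 400. A sketch of that tailing step, using the two IDs above as machine-specific placeholders:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // tailContainer returns the last n log lines of a container, like
	    // the `docker logs --tail 400 <id>` commands in the cycles above.
	    func tailContainer(id string, n int) (string, error) {
	    	out, err := exec.Command("docker", "logs",
	    		fmt.Sprintf("--tail=%d", n), id).CombinedOutput()
	    	return string(out), err
	    }

	    func main() {
	    	// IDs copied from the listing above; they vary per machine.
	    	for _, id := range []string{"1e95c05b9ec9", "b0481f42e8f4"} {
	    		logs, err := tailContainer(id, 400)
	    		if err != nil {
	    			fmt.Printf("%s: %v\n", id, err)
	    			continue
	    		}
	    		fmt.Printf("=== %s ===\n%s", id, logs)
	    	}
	    })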
	I0731 04:00:52.846459    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:52.865294    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:52.883944    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:52.884084    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:52.900803    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:52.900916    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:52.911800    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:52.911890    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:52.919521    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:52.919581    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:52.927649    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:52.927705    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:52.934006    5599 logs.go:284] 2 containers: [1e95c05b9ec9 b0481f42e8f4]
	I0731 04:00:52.934055    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:52.939700    5599 logs.go:284] 0 containers: []
	W0731 04:00:52.939704    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:52.939745    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:52.945435    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:52.945445    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:52.945448    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:52.952138    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:00:52.952144    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:00:52.958629    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:52.958634    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:52.964873    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:52.964878    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:52.993504    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:52.993508    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:53.005562    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:53.005567    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:53.010592    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:53.010595    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:53.023197    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:53.023202    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:53.052526    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:53.052532    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:53.104635    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:53.104639    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:53.118023    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:53.118028    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:53.124478    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:53.124485    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:53.136259    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:53.136266    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:53.149420    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:53.149425    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:53.155959    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:53.155965    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:53.182841    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:55.685361    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:55.704347    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:55.723119    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:55.723242    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:55.736186    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:55.736292    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:55.746757    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:55.746851    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:55.755929    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:55.755999    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:55.767318    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:55.767369    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:55.781089    5599 logs.go:284] 2 containers: [1e95c05b9ec9 b0481f42e8f4]
	I0731 04:00:55.781149    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:55.787100    5599 logs.go:284] 0 containers: []
	W0731 04:00:55.787105    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:55.787152    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:55.792573    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:55.792584    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:55.792587    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:55.801940    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:55.801945    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:55.832337    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:00:55.832342    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:00:55.838799    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:55.838805    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:55.889989    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:55.889993    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:55.901730    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:55.901735    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:00:55.912318    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:55.912324    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:55.919139    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:55.919144    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:55.925443    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:55.925448    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:55.955064    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:55.955070    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:55.959625    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:55.959628    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:55.972205    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:55.972211    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:55.979108    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:55.979113    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:55.990592    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:55.990599    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:56.023639    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:56.023645    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:56.023649    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:58.536107    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:00:58.554823    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:00:58.574526    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:00:58.574661    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:00:58.587489    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:00:58.587594    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:00:58.597061    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:00:58.597145    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:00:58.606031    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:00:58.606104    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:00:58.613243    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:00:58.613295    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:00:58.619954    5599 logs.go:284] 2 containers: [1e95c05b9ec9 b0481f42e8f4]
	I0731 04:00:58.620002    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:00:58.625966    5599 logs.go:284] 0 containers: []
	W0731 04:00:58.625971    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:00:58.626014    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:00:58.631932    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:00:58.631941    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:00:58.631944    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:00:58.638583    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:00:58.638587    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:00:58.644669    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:00:58.644673    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:00:58.672504    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:00:58.672510    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:00:58.684517    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:00:58.684525    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:00:58.694033    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:00:58.694038    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:00:58.701928    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:00:58.701933    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:00:58.708214    5599 logs.go:123] Gathering logs for kube-controller-manager [b0481f42e8f4] ...
	I0731 04:00:58.708219    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0481f42e8f4"
	I0731 04:00:58.714484    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:00:58.714489    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:00:58.743640    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:00:58.743646    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:00:58.793735    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:00:58.793739    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:00:58.821010    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:00:58.821016    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:00:58.821021    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:00:58.838512    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:00:58.838517    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:00:58.850922    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:00:58.850926    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:00:58.855824    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:00:58.855828    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:01.364556    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:01.380697    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:01.397564    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:01.397680    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:01.410290    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:01.410394    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:01.419819    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:01.419923    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:01.428377    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:01.428469    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:01.435398    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:01.435448    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:01.441862    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:01.441906    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:01.448071    5599 logs.go:284] 0 containers: []
	W0731 04:01:01.448075    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:01.448118    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:01.453968    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:01.453980    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:01.453983    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:01.506869    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:01.506874    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:01.513567    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:01.513576    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:01.520504    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:01.520509    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:01.533114    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:01.533119    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:01.542180    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:01.542186    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:01.549913    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:01.549917    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:01.556581    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:01.556586    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:01.561043    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:01.561047    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:01.588196    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:01.588201    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:01.588204    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:01.598045    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:01.598052    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:01.604895    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:01.604899    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:01.633618    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:01.633622    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:01.663480    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:01.663485    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:04.177430    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:04.194294    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:04.212159    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:04.212279    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:04.224843    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:04.224940    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:04.235163    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:04.235242    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:04.243738    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:04.243792    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:04.251530    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:04.251573    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:04.261521    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:04.261577    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:04.267732    5599 logs.go:284] 0 containers: []
	W0731 04:01:04.267737    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:04.267784    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:04.273571    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:04.273580    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:04.273583    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:04.282694    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:04.282699    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:04.309712    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:04.309717    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:04.309721    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:04.319467    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:04.319472    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:04.326455    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:04.326460    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:04.354841    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:04.354845    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:04.365755    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:04.365762    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:04.384308    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:04.384315    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:04.393029    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:04.393036    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:04.400207    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:04.400213    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:04.450227    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:04.450233    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:04.454883    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:04.454888    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:04.461373    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:04.461380    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:04.467766    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:04.467771    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:07.000907    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:07.019068    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:07.038648    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:07.038779    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:07.051659    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:07.051768    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:07.061939    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:07.062017    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:07.072401    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:07.072476    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:07.080071    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:07.080130    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:07.086567    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:07.086610    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:07.092461    5599 logs.go:284] 0 containers: []
	W0731 04:01:07.092469    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:07.092506    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:07.098046    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:07.098057    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:07.098060    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:07.110384    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:07.110390    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:07.140897    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:07.140903    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:07.140907    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:07.153834    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:07.153840    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:07.180937    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:07.180940    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:07.189879    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:07.189885    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:07.203724    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:07.203731    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:07.210400    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:07.210405    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:07.224783    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:07.224788    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:07.255222    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:07.255228    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:07.263659    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:07.263664    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:07.272342    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:07.272346    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:07.325877    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:07.325883    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:07.330882    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:07.330886    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:09.839722    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:09.857812    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:09.876855    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:09.876987    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:09.889667    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:09.889763    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:09.899836    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:09.899924    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:09.909004    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:09.909075    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:09.916251    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:09.916308    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:09.922789    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:09.922841    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:09.928788    5599 logs.go:284] 0 containers: []
	W0731 04:01:09.928793    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:09.928833    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:09.934705    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:09.934715    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:09.934719    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:09.941446    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:09.941450    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:09.949368    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:09.949373    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:09.955547    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:09.955551    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:09.967249    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:09.967256    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:10.021019    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:10.021024    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:10.043330    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:10.043335    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:10.052593    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:10.052600    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:10.089708    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:10.089714    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:10.118434    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:10.118440    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:10.145824    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:10.145829    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:10.145833    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:10.155487    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:10.155493    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:10.164762    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:10.164767    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:10.169515    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:10.169518    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:12.678233    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:12.697079    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:12.715168    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:12.715301    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:12.727424    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:12.727519    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:12.738182    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:12.738273    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:12.747031    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:12.747109    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:12.754153    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:12.754199    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:12.760712    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:12.760751    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:12.771595    5599 logs.go:284] 0 containers: []
	W0731 04:01:12.771601    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:12.771641    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:12.777303    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:12.777315    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:12.777319    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:12.782451    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:12.782455    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:12.794646    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:12.794651    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:12.800875    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:12.800881    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:12.812875    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:12.812882    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:12.840477    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:12.840482    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:12.840485    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:12.850130    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:12.850134    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:12.880968    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:12.880976    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:12.888601    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:12.888608    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:12.915338    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:12.915342    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:12.922143    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:12.922147    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:12.975938    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:12.975945    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:12.985455    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:12.985461    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:12.993439    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:12.993445    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:15.502341    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:15.520161    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:15.538596    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:15.538737    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:15.552064    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:15.552167    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:15.562876    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:15.562970    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:15.576562    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:15.576629    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:15.583819    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:15.583876    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:15.590141    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:15.590187    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:15.595783    5599 logs.go:284] 0 containers: []
	W0731 04:01:15.595789    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:15.595834    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:15.601331    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:15.601341    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:15.601344    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:15.613216    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:15.613223    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:15.644434    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:15.644440    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:15.655380    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:15.655387    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:15.667659    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:15.667665    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:15.677555    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:15.677561    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:15.684250    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:15.684255    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:15.690961    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:15.690965    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:15.717554    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:15.717557    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:15.722589    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:15.722594    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:15.749777    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:15.749782    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:15.749786    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:15.802191    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:15.802197    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:15.811674    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:15.811681    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:15.819459    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:15.819465    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:18.330479    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:18.349578    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:18.367664    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:18.367826    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:18.384196    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:18.384308    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:18.395134    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:18.395214    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:18.403491    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:18.403557    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:18.410636    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:18.410680    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:18.416838    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:18.416895    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:18.422701    5599 logs.go:284] 0 containers: []
	W0731 04:01:18.422705    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:18.422741    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:18.428652    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:18.428661    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:18.428664    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:18.483630    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:18.483634    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:18.493114    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:18.493120    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:18.501538    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:18.501544    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:18.508200    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:18.508206    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:18.539317    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:18.539323    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:18.567952    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:18.567956    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:18.594187    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:18.594196    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:18.594200    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:18.607473    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:18.607479    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:18.611872    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:18.611877    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:18.621599    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:18.621605    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:18.629812    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:18.629817    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:18.645202    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:18.645207    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:18.651468    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:18.651473    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:21.165238    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:21.184177    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:21.202983    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:21.203126    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:21.215459    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:21.215549    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:21.225786    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:21.225874    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:21.234363    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:21.234435    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:21.241422    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:21.241466    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:21.249441    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:21.249503    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:21.255757    5599 logs.go:284] 0 containers: []
	W0731 04:01:21.255762    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:21.255803    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:21.261464    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:21.261474    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:21.261477    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:21.288659    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:21.288664    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:21.288669    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:21.299040    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:21.299046    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:21.303656    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:21.303660    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:21.315579    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:21.315584    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:21.322248    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:21.322252    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:21.328788    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:21.328793    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:21.357879    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:21.357882    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:21.370662    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:21.370667    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:21.380597    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:21.380603    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:21.414233    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:21.414239    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:21.426340    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:21.426346    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:21.432932    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:21.432937    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:21.484674    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:21.484681    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:23.998724    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:24.016923    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 04:01:24.037621    5599 logs.go:284] 1 containers: [99a8476cf634]
	I0731 04:01:24.037727    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 04:01:24.049784    5599 logs.go:284] 2 containers: [403feb69e43e 45f684e22cf9]
	I0731 04:01:24.049858    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 04:01:24.059922    5599 logs.go:284] 1 containers: [fdc896c688a8]
	I0731 04:01:24.060031    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 04:01:24.073739    5599 logs.go:284] 2 containers: [6ac1b8ce929b 54adeabd1c3f]
	I0731 04:01:24.073798    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 04:01:24.080710    5599 logs.go:284] 1 containers: [669e0f6bc8c8]
	I0731 04:01:24.080767    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 04:01:24.087261    5599 logs.go:284] 1 containers: [1e95c05b9ec9]
	I0731 04:01:24.087310    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 04:01:24.092759    5599 logs.go:284] 0 containers: []
	W0731 04:01:24.092764    5599 logs.go:286] No container was found matching "kindnet"
	I0731 04:01:24.092807    5599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 04:01:24.098299    5599 logs.go:284] 1 containers: [0430a68e15f9]
	I0731 04:01:24.098308    5599 logs.go:123] Gathering logs for kubelet ...
	I0731 04:01:24.098311    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 04:01:24.151164    5599 logs.go:123] Gathering logs for dmesg ...
	I0731 04:01:24.151169    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 04:01:24.155861    5599 logs.go:123] Gathering logs for kube-proxy [669e0f6bc8c8] ...
	I0731 04:01:24.155865    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 669e0f6bc8c8"
	I0731 04:01:24.162990    5599 logs.go:123] Gathering logs for kube-controller-manager [1e95c05b9ec9] ...
	I0731 04:01:24.162996    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e95c05b9ec9"
	I0731 04:01:24.169348    5599 logs.go:123] Gathering logs for container status ...
	I0731 04:01:24.169354    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 04:01:24.180963    5599 logs.go:123] Gathering logs for kube-scheduler [6ac1b8ce929b] ...
	I0731 04:01:24.180970    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ac1b8ce929b"
	I0731 04:01:24.215264    5599 logs.go:123] Gathering logs for Docker ...
	I0731 04:01:24.215273    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 04:01:24.243789    5599 logs.go:123] Gathering logs for describe nodes ...
	I0731 04:01:24.243793    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 04:01:24.269731    5599 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 04:01:24.269736    5599 logs.go:123] Gathering logs for kube-apiserver [99a8476cf634] ...
	I0731 04:01:24.269740    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a8476cf634"
	I0731 04:01:24.282844    5599 logs.go:123] Gathering logs for etcd [403feb69e43e] ...
	I0731 04:01:24.282850    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403feb69e43e"
	I0731 04:01:24.295555    5599 logs.go:123] Gathering logs for etcd [45f684e22cf9] ...
	I0731 04:01:24.295562    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f684e22cf9"
	I0731 04:01:24.309032    5599 logs.go:123] Gathering logs for coredns [fdc896c688a8] ...
	I0731 04:01:24.309037    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdc896c688a8"
	I0731 04:01:24.315332    5599 logs.go:123] Gathering logs for kube-scheduler [54adeabd1c3f] ...
	I0731 04:01:24.315336    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54adeabd1c3f"
	I0731 04:01:24.326134    5599 logs.go:123] Gathering logs for storage-provisioner [0430a68e15f9] ...
	I0731 04:01:24.326138    5599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0430a68e15f9"
	I0731 04:01:26.835349    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:26.854250    5599 kubeadm.go:640] restartCluster took 4m1.412440625s
	W0731 04:01:26.854402    5599 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0731 04:01:26.854444    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 04:01:28.504394    5599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.64994425s)
	I0731 04:01:28.504454    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:01:28.509528    5599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 04:01:28.512680    5599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 04:01:28.515394    5599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
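
The four "No such file or directory" results are expected at this point: the kubeadm reset a few lines earlier removes the kubeconfig files kubeadm maintains under /etc/kubernetes, so the stale-config check exits with status 2 and minikube proceeds straight to a fresh "kubeadm init", which rewrites all four files (see the [kubeconfig] Writing ... lines below). A quick way to confirm the before/after state by hand, assuming guest shell access (illustrative):

    # Before init: expect "No such file or directory"; after init: four fresh kubeconfigs.
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
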
	I0731 04:01:28.515405    5599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 04:01:28.533714    5599 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 04:01:28.533739    5599 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 04:01:28.579144    5599 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 04:01:28.579193    5599 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 04:01:28.579247    5599 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 04:01:28.640688    5599 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 04:01:28.644663    5599 out.go:204]   - Generating certificates and keys ...
	I0731 04:01:28.644707    5599 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 04:01:28.644741    5599 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 04:01:28.644773    5599 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 04:01:28.644811    5599 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0731 04:01:28.644848    5599 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 04:01:28.644880    5599 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0731 04:01:28.644904    5599 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0731 04:01:28.644931    5599 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0731 04:01:28.644970    5599 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 04:01:28.645004    5599 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 04:01:28.645021    5599 kubeadm.go:322] [certs] Using the existing "sa" key
	I0731 04:01:28.645055    5599 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 04:01:28.829706    5599 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 04:01:29.055306    5599 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 04:01:29.108688    5599 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 04:01:29.147323    5599 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 04:01:29.154156    5599 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 04:01:29.154220    5599 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 04:01:29.154247    5599 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 04:01:29.236229    5599 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 04:01:29.245435    5599 out.go:204]   - Booting up control plane ...
	I0731 04:01:29.245509    5599 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 04:01:29.245541    5599 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 04:01:29.245586    5599 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 04:01:29.245629    5599 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 04:01:29.245710    5599 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 04:01:33.244003    5599 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.004553 seconds
	I0731 04:01:33.244407    5599 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 04:01:33.268567    5599 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 04:01:33.783205    5599 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 04:01:33.783335    5599 kubeadm.go:322] [mark-control-plane] Marking the node functional-652000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 04:01:34.297146    5599 kubeadm.go:322] [bootstrap-token] Using token: 6g4rvb.f8vw3t7etkd07ib2
	I0731 04:01:34.301657    5599 out.go:204]   - Configuring RBAC rules ...
	I0731 04:01:34.301801    5599 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 04:01:34.303636    5599 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 04:01:34.309899    5599 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 04:01:34.312295    5599 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 04:01:34.316422    5599 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 04:01:34.318352    5599 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 04:01:34.326243    5599 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 04:01:34.516039    5599 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 04:01:34.705989    5599 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 04:01:34.707561    5599 kubeadm.go:322] 
	I0731 04:01:34.707587    5599 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 04:01:34.707589    5599 kubeadm.go:322] 
	I0731 04:01:34.707624    5599 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 04:01:34.707626    5599 kubeadm.go:322] 
	I0731 04:01:34.707640    5599 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 04:01:34.707670    5599 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 04:01:34.707691    5599 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 04:01:34.707692    5599 kubeadm.go:322] 
	I0731 04:01:34.707717    5599 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 04:01:34.707722    5599 kubeadm.go:322] 
	I0731 04:01:34.707753    5599 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 04:01:34.707755    5599 kubeadm.go:322] 
	I0731 04:01:34.707780    5599 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 04:01:34.707813    5599 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 04:01:34.707852    5599 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 04:01:34.707853    5599 kubeadm.go:322] 
	I0731 04:01:34.707898    5599 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 04:01:34.707938    5599 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 04:01:34.707939    5599 kubeadm.go:322] 
	I0731 04:01:34.707983    5599 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8441 --token 6g4rvb.f8vw3t7etkd07ib2 \
	I0731 04:01:34.708038    5599 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa \
	I0731 04:01:34.708049    5599 kubeadm.go:322] 	--control-plane 
	I0731 04:01:34.708051    5599 kubeadm.go:322] 
	I0731 04:01:34.708089    5599 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 04:01:34.708090    5599 kubeadm.go:322] 
	I0731 04:01:34.708128    5599 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8441 --token 6g4rvb.f8vw3t7etkd07ib2 \
	I0731 04:01:34.708180    5599 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa 
	I0731 04:01:34.708639    5599 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
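
The `[WARNING Service-Kubelet]` line above is kubeadm's standard notice when the kubelet systemd unit is not enabled for boot; minikube manages the unit itself, but the fix kubeadm suggests is a one-liner run inside the guest (e.g. via `minikube ssh`):

    # Enable the kubelet unit at boot, as the kubeadm warning suggests:
    sudo systemctl enable kubelet.service
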
	I0731 04:01:34.708653    5599 cni.go:84] Creating CNI manager for ""
	I0731 04:01:34.708659    5599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:01:34.712382    5599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 04:01:34.715373    5599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 04:01:34.718661    5599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
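
The `scp memory` line above writes the bridge CNI config that was just recommended into /etc/cni/net.d. The actual 457-byte payload is not reproduced in the log; the sketch below is a hypothetical reconstruction of what a bridge conflist of this kind looks like, using the node's PodCIDR (10.244.0.0/24) from the node description later in this report. Field values are illustrative, not the literal file:

    # Hypothetical reconstruction of /etc/cni/net.d/1-k8s.conflist (values illustrative):
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        }
      ]
    }
    EOF
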
	I0731 04:01:34.724347    5599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 04:01:34.724432    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=functional-652000 minikube.k8s.io/updated_at=2023_07_31T04_01_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:34.724453    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:34.734150    5599 ops.go:34] apiserver oom_adj: -16
	I0731 04:01:34.775416    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:34.807046    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:35.339969    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:35.839915    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:36.338098    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:36.839979    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:37.339960    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:37.840021    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:38.339980    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:38.839998    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:39.340156    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:39.840246    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:40.340005    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:40.839944    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:41.339955    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:41.839866    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:42.339962    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:42.839919    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:43.338549    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:43.839946    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:44.338754    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:44.839955    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:45.339642    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:45.839969    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:46.339304    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:46.839982    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:47.339900    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:47.839885    5599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:01:47.928967    5599 kubeadm.go:1081] duration metric: took 13.204629417s to wait for elevateKubeSystemPrivileges.
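
The burst of `kubectl get sa default` calls above is minikube polling at roughly 500 ms intervals until the `default` ServiceAccount exists, which is what `elevateKubeSystemPrivileges` waits on before its cluster-admin binding takes effect. A minimal standalone equivalent, assuming `kubectl` is on PATH and pointed at this cluster:

    # Poll until the default ServiceAccount appears (what the loop above does):
    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
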
	I0731 04:01:47.928979    5599 kubeadm.go:406] StartCluster complete in 4m22.499011834s
	I0731 04:01:47.928987    5599 settings.go:142] acquiring lock: {Name:mk7e2067b9c26be8d46dc95ba3a8a7ad946cadb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:01:47.929073    5599 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:01:47.929365    5599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/kubeconfig: {Name:mk98971837606256b8bab3d325e05dbfd512b496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:01:47.929543    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 04:01:47.929594    5599 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 04:01:47.929632    5599 addons.go:69] Setting storage-provisioner=true in profile "functional-652000"
	I0731 04:01:47.929639    5599 addons.go:231] Setting addon storage-provisioner=true in "functional-652000"
	W0731 04:01:47.929642    5599 addons.go:240] addon storage-provisioner should already be in state true
	I0731 04:01:47.929659    5599 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:01:47.929653    5599 addons.go:69] Setting default-storageclass=true in profile "functional-652000"
	I0731 04:01:47.929671    5599 host.go:66] Checking if "functional-652000" exists ...
	I0731 04:01:47.929676    5599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-652000"
	I0731 04:01:47.932570    5599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:01:47.936878    5599 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:01:47.936884    5599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 04:01:47.936891    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 04:01:47.942593    5599 addons.go:231] Setting addon default-storageclass=true in "functional-652000"
	W0731 04:01:47.942600    5599 addons.go:240] addon default-storageclass should already be in state true
	I0731 04:01:47.942612    5599 host.go:66] Checking if "functional-652000" exists ...
	I0731 04:01:47.943291    5599 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 04:01:47.943295    5599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 04:01:47.943301    5599 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
	I0731 04:01:47.949366    5599 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-652000" context rescaled to 1 replicas
	I0731 04:01:47.949383    5599 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:01:47.953734    5599 out.go:177] * Verifying Kubernetes components...
	I0731 04:01:47.963851    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:01:48.052179    5599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:01:48.075780    5599 node_ready.go:35] waiting up to 6m0s for node "functional-652000" to be "Ready" ...
	I0731 04:01:48.075948    5599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 04:01:48.077461    5599 node_ready.go:49] node "functional-652000" has status "Ready":"True"
	I0731 04:01:48.077465    5599 node_ready.go:38] duration metric: took 1.675042ms waiting for node "functional-652000" to be "Ready" ...
	I0731 04:01:48.077468    5599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 04:01:48.085241    5599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-22sqt" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:48.095907    5599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 04:01:48.543554    5599 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
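
The `sed` pipeline run at 04:01:48 rewrites the CoreDNS Corefile in place: it adds a `log` directive before `errors` and injects a `hosts` block just above the `forward . /etc/resolv.conf` line. Reconstructed from the sed expressions (not copied from the live cluster), the injected stanza is:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }
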
	I0731 04:01:48.547187    5599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 04:01:48.555278    5599 addons.go:502] enable addons completed in 625.699625ms: enabled=[default-storageclass storage-provisioner]
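
With both addons applied, their state can be confirmed from the host; `minikube addons list` is the usual interactive check (profile name as in this run):

    # Confirm addon state for this profile (run on the host):
    minikube -p functional-652000 addons list
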
	I0731 04:01:49.095936    5599 pod_ready.go:92] pod "coredns-5d78c9869d-22sqt" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:49.095941    5599 pod_ready.go:81] duration metric: took 1.010696084s waiting for pod "coredns-5d78c9869d-22sqt" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.095945    5599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-cf5mq" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.099840    5599 pod_ready.go:92] pod "coredns-5d78c9869d-cf5mq" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:49.099842    5599 pod_ready.go:81] duration metric: took 3.89525ms waiting for pod "coredns-5d78c9869d-cf5mq" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.099845    5599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.102106    5599 pod_ready.go:92] pod "etcd-functional-652000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:49.102109    5599 pod_ready.go:81] duration metric: took 2.262ms waiting for pod "etcd-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.102111    5599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.279540    5599 pod_ready.go:92] pod "kube-apiserver-functional-652000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:49.279545    5599 pod_ready.go:81] duration metric: took 177.431167ms waiting for pod "kube-apiserver-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.279549    5599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.681672    5599 pod_ready.go:92] pod "kube-controller-manager-functional-652000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:49.681681    5599 pod_ready.go:81] duration metric: took 402.12775ms waiting for pod "kube-controller-manager-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:49.681688    5599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l59v7" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:50.085035    5599 pod_ready.go:92] pod "kube-proxy-l59v7" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:50.085058    5599 pod_ready.go:81] duration metric: took 403.362208ms waiting for pod "kube-proxy-l59v7" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:50.085076    5599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:50.484891    5599 pod_ready.go:92] pod "kube-scheduler-functional-652000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:01:50.484914    5599 pod_ready.go:81] duration metric: took 399.826458ms waiting for pod "kube-scheduler-functional-652000" in "kube-system" namespace to be "Ready" ...
	I0731 04:01:50.484929    5599 pod_ready.go:38] duration metric: took 2.407458833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
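
The per-pod readiness waits above can be approximated with `kubectl wait` against the same component labels; for example, the kube-dns wait would look roughly like:

    # Rough manual equivalent of one of the readiness waits above:
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=6m
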
	I0731 04:01:50.484971    5599 api_server.go:52] waiting for apiserver process to appear ...
	I0731 04:01:50.485275    5599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:01:50.502723    5599 api_server.go:72] duration metric: took 2.553329584s to wait for apiserver process to appear ...
	I0731 04:01:50.502734    5599 api_server.go:88] waiting for apiserver healthz status ...
	I0731 04:01:50.502748    5599 api_server.go:253] Checking apiserver healthz at https://192.168.105.14:8441/healthz ...
	I0731 04:01:50.511019    5599 api_server.go:279] https://192.168.105.14:8441/healthz returned 200:
	ok
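
The healthz probe is a plain HTTPS GET against the apiserver; reproduced by hand it would be (the `-k` skips certificate verification, since minikube's own probe trusts its generated CA instead):

    # Manual equivalent of the apiserver health probe above:
    curl -sk https://192.168.105.14:8441/healthz
    # expected body: ok
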
	I0731 04:01:50.512942    5599 api_server.go:141] control plane version: v1.27.3
	I0731 04:01:50.512953    5599 api_server.go:131] duration metric: took 10.214459ms to wait for apiserver health ...
	I0731 04:01:50.512959    5599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 04:01:50.685812    5599 system_pods.go:59] 8 kube-system pods found
	I0731 04:01:50.685830    5599 system_pods.go:61] "coredns-5d78c9869d-22sqt" [a18de5d7-94ca-4421-a8fa-ba5bfee1a493] Running
	I0731 04:01:50.685835    5599 system_pods.go:61] "coredns-5d78c9869d-cf5mq" [ed66e670-7e5a-4221-9400-ca4ad3b82c97] Running
	I0731 04:01:50.685840    5599 system_pods.go:61] "etcd-functional-652000" [a81151ba-6d85-42d7-88c4-dce1447980fc] Running
	I0731 04:01:50.685845    5599 system_pods.go:61] "kube-apiserver-functional-652000" [6952f554-8aed-463d-bc33-8a252554d973] Running
	I0731 04:01:50.685850    5599 system_pods.go:61] "kube-controller-manager-functional-652000" [96a05527-d442-4d60-8791-cc73cd8b06c1] Running
	I0731 04:01:50.685854    5599 system_pods.go:61] "kube-proxy-l59v7" [8ca82601-c9f2-467d-b06b-f56ce7a709ea] Running
	I0731 04:01:50.685858    5599 system_pods.go:61] "kube-scheduler-functional-652000" [035d4ccb-f893-40de-97dd-8feb44e63a61] Running
	I0731 04:01:50.685861    5599 system_pods.go:61] "storage-provisioner" [4291671e-461c-4045-a8d8-0e34c8faaeb4] Running
	I0731 04:01:50.685866    5599 system_pods.go:74] duration metric: took 172.902666ms to wait for pod list to return data ...
	I0731 04:01:50.685872    5599 default_sa.go:34] waiting for default service account to be created ...
	I0731 04:01:50.885573    5599 default_sa.go:45] found service account: "default"
	I0731 04:01:50.885593    5599 default_sa.go:55] duration metric: took 199.715ms for default service account to be created ...
	I0731 04:01:50.885606    5599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 04:01:51.086072    5599 system_pods.go:86] 8 kube-system pods found
	I0731 04:01:51.086085    5599 system_pods.go:89] "coredns-5d78c9869d-22sqt" [a18de5d7-94ca-4421-a8fa-ba5bfee1a493] Running
	I0731 04:01:51.086093    5599 system_pods.go:89] "coredns-5d78c9869d-cf5mq" [ed66e670-7e5a-4221-9400-ca4ad3b82c97] Running
	I0731 04:01:51.086098    5599 system_pods.go:89] "etcd-functional-652000" [a81151ba-6d85-42d7-88c4-dce1447980fc] Running
	I0731 04:01:51.086103    5599 system_pods.go:89] "kube-apiserver-functional-652000" [6952f554-8aed-463d-bc33-8a252554d973] Running
	I0731 04:01:51.086107    5599 system_pods.go:89] "kube-controller-manager-functional-652000" [96a05527-d442-4d60-8791-cc73cd8b06c1] Running
	I0731 04:01:51.086112    5599 system_pods.go:89] "kube-proxy-l59v7" [8ca82601-c9f2-467d-b06b-f56ce7a709ea] Running
	I0731 04:01:51.086116    5599 system_pods.go:89] "kube-scheduler-functional-652000" [035d4ccb-f893-40de-97dd-8feb44e63a61] Running
	I0731 04:01:51.086121    5599 system_pods.go:89] "storage-provisioner" [4291671e-461c-4045-a8d8-0e34c8faaeb4] Running
	I0731 04:01:51.086126    5599 system_pods.go:126] duration metric: took 200.5155ms to wait for k8s-apps to be running ...
	I0731 04:01:51.086130    5599 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 04:01:51.086270    5599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:01:51.098567    5599 system_svc.go:56] duration metric: took 12.431792ms WaitForService to wait for kubelet.
	I0731 04:01:51.098579    5599 kubeadm.go:581] duration metric: took 3.149190459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 04:01:51.098597    5599 node_conditions.go:102] verifying NodePressure condition ...
	I0731 04:01:51.284245    5599 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0731 04:01:51.284290    5599 node_conditions.go:123] node cpu capacity is 2
	I0731 04:01:51.284321    5599 node_conditions.go:105] duration metric: took 185.719125ms to run NodePressure ...
	I0731 04:01:51.284338    5599 start.go:228] waiting for startup goroutines ...
	I0731 04:01:51.284348    5599 start.go:233] waiting for cluster config update ...
	I0731 04:01:51.284363    5599 start.go:242] writing updated cluster config ...
	I0731 04:01:51.285166    5599 ssh_runner.go:195] Run: rm -f paused
	I0731 04:01:51.339211    5599 start.go:596] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0731 04:01:51.344202    5599 out.go:177] * Done! kubectl is now configured to use "functional-652000" cluster and "default" namespace by default
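
At this point the host kubeconfig has been switched to the new context, so a quick smoke test from the host would be something like:

    # Sanity check after "Done!": current context and system pods
    kubectl config current-context        # expected: functional-652000
    kubectl get pods -n kube-system
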
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-07-31 10:55:52 UTC, ends at Mon 2023-07-31 11:02:38 UTC. --
	Jul 31 11:02:30 functional-652000 dockerd[6775]: time="2023-07-31T11:02:30.229995318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:02:30 functional-652000 dockerd[6775]: time="2023-07-31T11:02:30.230003277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:02:30 functional-652000 cri-dockerd[7039]: time="2023-07-31T11:02:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 11:02:32 functional-652000 cri-dockerd[7039]: time="2023-07-31T11:02:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.896886549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.896914466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.896925091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.896930258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:02:32 functional-652000 dockerd[6769]: time="2023-07-31T11:02:32.946354258Z" level=info msg="ignoring event" container=081676da71f5f0299f8580e43467c819ef0d44f40632aa1f0e436be35a001d5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.946428509Z" level=info msg="shim disconnected" id=081676da71f5f0299f8580e43467c819ef0d44f40632aa1f0e436be35a001d5a namespace=moby
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.946452634Z" level=warning msg="cleaning up after shim disconnected" id=081676da71f5f0299f8580e43467c819ef0d44f40632aa1f0e436be35a001d5a namespace=moby
	Jul 31 11:02:32 functional-652000 dockerd[6775]: time="2023-07-31T11:02:32.946467301Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:02:34 functional-652000 dockerd[6769]: time="2023-07-31T11:02:34.500860009Z" level=info msg="ignoring event" container=8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:02:34 functional-652000 dockerd[6775]: time="2023-07-31T11:02:34.501289762Z" level=info msg="shim disconnected" id=8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c namespace=moby
	Jul 31 11:02:34 functional-652000 dockerd[6775]: time="2023-07-31T11:02:34.501365888Z" level=warning msg="cleaning up after shim disconnected" id=8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c namespace=moby
	Jul 31 11:02:34 functional-652000 dockerd[6775]: time="2023-07-31T11:02:34.501374054Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:02:34 functional-652000 dockerd[6775]: time="2023-07-31T11:02:34.511662994Z" level=warning msg="cleanup warnings time=\"2023-07-31T11:02:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.606266893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.606337227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.606358894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.606370894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:02:37 functional-652000 dockerd[6769]: time="2023-07-31T11:02:37.656546525Z" level=info msg="ignoring event" container=c053b09a2501420a790a0220b29ea5377df92f7431cc207a80b9d9701a3caa47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.656771694Z" level=info msg="shim disconnected" id=c053b09a2501420a790a0220b29ea5377df92f7431cc207a80b9d9701a3caa47 namespace=moby
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.656806319Z" level=warning msg="cleaning up after shim disconnected" id=c053b09a2501420a790a0220b29ea5377df92f7431cc207a80b9d9701a3caa47 namespace=moby
	Jul 31 11:02:37 functional-652000 dockerd[6775]: time="2023-07-31T11:02:37.656811277Z" level=info msg="cleaning up dead shim" namespace=moby
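
The `==> Docker <==` block above is the guest's journald excerpt for the docker unit. To pull the same slice directly from the VM, one would run something like the following (exact flags are an assumption; `minikube ssh` passes the command through to the guest):

    # Hypothetical manual retrieval of the docker journal from the guest:
    minikube -p functional-652000 ssh -- sudo journalctl -u docker --no-pager
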
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	c053b09a25014       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            2                   263d190e53e1b
	081676da71f5f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   8bf99126da528
	96556b3ee14af       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   365144f9e8f95
	e1a771708b06a       nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca                         23 seconds ago       Running             myfrontend                0                   e09fa379d6165
	50daf7b82fea7       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                         38 seconds ago       Running             nginx                     0                   54e7c7e44d47f
	2f17d5cd98ddb       ba04bb24b9575                                                                                         50 seconds ago       Running             storage-provisioner       0                   a02c016b0cae8
	8b678dca30383       97e04611ad434                                                                                         51 seconds ago       Running             coredns                   0                   9f25398ffcd75
	89cce5f88e420       fb73e92641fd5                                                                                         51 seconds ago       Running             kube-proxy                0                   21388f71e61fc
	d7b44f68d80f5       bcb9e554eaab6                                                                                         About a minute ago   Running             kube-scheduler            0                   db3b396b4bd2c
	88db6ed01b0f6       24bc64e911039                                                                                         About a minute ago   Running             etcd                      0                   c1e77b216474c
	7444133fbc78c       39dfb036b0986                                                                                         About a minute ago   Running             kube-apiserver            0                   a01ce1ed24e8c
	b2fe0f99744b2       ab3683b584ae5                                                                                         About a minute ago   Running             kube-controller-manager   0                   88feac58fc3dd
	
	* 
	* ==> coredns [8b678dca3038] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51438 - 38513 "HINFO IN 7047150623571142802.5541251153883958909. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004565757s
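
The HINFO query logged above is CoreDNS's own startup loop-detection probe, not test traffic. An in-cluster resolution check against the kube-dns service IP (10.96.0.10, allocated earlier in the apiserver log) could use the busybox image already pulled in this run; the pod name `dns-probe` is arbitrary:

    # Hypothetical in-cluster DNS check using the busybox image from this run:
    kubectl run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10
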
	
	* 
	* ==> describe nodes <==
	* Name:               functional-652000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-652000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=functional-652000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T04_01_34_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:01:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-652000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:02:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:02:35 +0000   Mon, 31 Jul 2023 11:01:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:02:35 +0000   Mon, 31 Jul 2023 11:01:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:02:35 +0000   Mon, 31 Jul 2023 11:01:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:02:35 +0000   Mon, 31 Jul 2023 11:01:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.14
	  Hostname:    functional-652000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 259c9aa36dcc48e0be3ada99559e404c
	  System UUID:                259c9aa36dcc48e0be3ada99559e404c
	  Boot ID:                    a0ae8366-4e45-412a-9bea-f71f8cb8e843
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-7cxn2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     hello-node-connect-58d66798bb-5pg7d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 coredns-5d78c9869d-cf5mq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     51s
	  kube-system                 etcd-functional-652000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         64s
	  kube-system                 kube-apiserver-functional-652000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-functional-652000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-l59v7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-functional-652000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node functional-652000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node functional-652000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)  kubelet          Node functional-652000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s                kubelet          Node functional-652000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s                kubelet          Node functional-652000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s                kubelet          Node functional-652000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node functional-652000 event: Registered Node functional-652000 in Controller
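
The `==> describe nodes <==` section is the standard node dump; to regenerate it against this profile one would simply run:

    kubectl describe node functional-652000
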
	
	* 
	* ==> dmesg <==
	* [  +1.287262] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.100522] systemd-fstab-generator[4474]: Ignoring "noauto" for root device
	[  +0.087236] systemd-fstab-generator[4486]: Ignoring "noauto" for root device
	[  +0.084435] systemd-fstab-generator[4497]: Ignoring "noauto" for root device
	[  +0.077620] systemd-fstab-generator[4508]: Ignoring "noauto" for root device
	[  +0.090977] systemd-fstab-generator[4579]: Ignoring "noauto" for root device
	[  +7.035055] kauditd_printk_skb: 29 callbacks suppressed
	[Jul31 10:57] systemd-fstab-generator[6309]: Ignoring "noauto" for root device
	[  +0.144114] systemd-fstab-generator[6343]: Ignoring "noauto" for root device
	[  +0.107770] systemd-fstab-generator[6354]: Ignoring "noauto" for root device
	[  +0.101135] systemd-fstab-generator[6367]: Ignoring "noauto" for root device
	[ +11.518202] systemd-fstab-generator[6919]: Ignoring "noauto" for root device
	[  +0.082076] systemd-fstab-generator[6930]: Ignoring "noauto" for root device
	[  +0.083259] systemd-fstab-generator[6950]: Ignoring "noauto" for root device
	[  +0.079166] systemd-fstab-generator[6961]: Ignoring "noauto" for root device
	[  +0.085367] systemd-fstab-generator[7032]: Ignoring "noauto" for root device
	[  +1.091201] systemd-fstab-generator[7282]: Ignoring "noauto" for root device
	[Jul31 11:01] systemd-fstab-generator[18364]: Ignoring "noauto" for root device
	[  +5.188057] systemd-fstab-generator[18961]: Ignoring "noauto" for root device
	[ +13.471281] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.308223] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 11:02] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.656968] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +11.755832] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.030020] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [88db6ed01b0f] <==
	* {"level":"info","ts":"2023-07-31T11:01:30.410Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e4ea7eefef298ba3","local-member-id":"5879b7a6a668c5bf","added-peer-id":"5879b7a6a668c5bf","added-peer-peer-urls":["https://192.168.105.14:2380"]}
	{"level":"info","ts":"2023-07-31T11:01:30.413Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-31T11:01:30.413Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"5879b7a6a668c5bf","initial-advertise-peer-urls":["https://192.168.105.14:2380"],"listen-peer-urls":["https://192.168.105.14:2380"],"advertise-client-urls":["https://192.168.105.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T11:01:30.413Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T11:01:30.413Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.14:2380"}
	{"level":"info","ts":"2023-07-31T11:01:30.413Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.14:2380"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf received MsgPreVoteResp from 5879b7a6a668c5bf at term 1"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf became candidate at term 2"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf received MsgVoteResp from 5879b7a6a668c5bf at term 2"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5879b7a6a668c5bf became leader at term 2"}
	{"level":"info","ts":"2023-07-31T11:01:31.010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5879b7a6a668c5bf elected leader 5879b7a6a668c5bf at term 2"}
	{"level":"info","ts":"2023-07-31T11:01:31.018Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"5879b7a6a668c5bf","local-member-attributes":"{Name:functional-652000 ClientURLs:[https://192.168.105.14:2379]}","request-path":"/0/members/5879b7a6a668c5bf/attributes","cluster-id":"e4ea7eefef298ba3","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T11:01:31.018Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.14:2379"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e4ea7eefef298ba3","local-member-id":"5879b7a6a668c5bf","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T11:01:31.019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T11:01:31.031Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-31T11:02:07.223Z","caller":"traceutil/trace.go:171","msg":"trace[1775408585] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"111.711596ms","start":"2023-07-31T11:02:07.112Z","end":"2023-07-31T11:02:07.223Z","steps":["trace[1775408585] 'process raft request'  (duration: 111.628013ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:02:38 up 6 min,  0 users,  load average: 0.78, 0.42, 0.19
	Linux functional-652000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7444133fbc78] <==
	* I0731 11:01:32.636564       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 11:01:32.643086       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:01:32.643105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 11:01:32.801668       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:01:32.811692       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 11:01:32.882979       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 11:01:32.885030       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.14]
	I0731 11:01:32.885365       1 controller.go:624] quota admission added evaluator for: endpoints
	I0731 11:01:32.886956       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:01:33.686818       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0731 11:01:34.486331       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0731 11:01:34.490528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 11:01:34.502207       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	E0731 11:01:41.722381       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-high workload-low global-default catch-all system node-high leader-election] items=[{target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625}]
	I0731 11:01:47.291386       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0731 11:01:47.436556       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0731 11:01:51.722777       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[catch-all system node-high leader-election workload-high workload-low global-default] items=[{target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649}]
	I0731 11:01:52.660751       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.98.85.61]
	I0731 11:01:57.382073       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.100.124.5]
	E0731 11:02:01.723522       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[catch-all system node-high leader-election workload-high workload-low global-default] items=[{target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649}]
	I0731 11:02:07.855399       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.100.199.252]
	E0731 11:02:11.723808       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[catch-all system node-high leader-election workload-high workload-low global-default] items=[{target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649}]
	I0731 11:02:21.303316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.101.176.170]
	E0731 11:02:21.724702       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[catch-all system node-high leader-election workload-high workload-low global-default] items=[{target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649}]
	E0731 11:02:31.725069       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-high workload-low global-default catch-all system node-high leader-election] items=[{target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625}]
	
	* 
	* ==> kube-controller-manager [b2fe0f99744b] <==
	* I0731 11:01:47.318541       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0731 11:01:47.322205       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0731 11:01:47.327045       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 11:01:47.327075       1 shared_informer.go:318] Caches are synced for daemon sets
	I0731 11:01:47.327084       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 11:01:47.327094       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 11:01:47.328998       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 11:01:47.332495       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-cf5mq"
	I0731 11:01:47.382638       1 shared_informer.go:318] Caches are synced for service account
	I0731 11:01:47.385826       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 11:01:47.387793       1 shared_informer.go:318] Caches are synced for namespace
	I0731 11:01:47.411671       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 11:01:47.432300       1 shared_informer.go:318] Caches are synced for attach detach
	I0731 11:01:47.439428       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l59v7"
	I0731 11:01:47.845568       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 11:01:47.910273       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 11:01:47.910291       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0731 11:01:47.924729       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0731 11:01:47.942338       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-22sqt"
	I0731 11:02:02.259269       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0731 11:02:02.259290       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0731 11:02:07.810960       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0731 11:02:07.815857       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-5pg7d"
	I0731 11:02:21.261874       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0731 11:02:21.264646       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-7cxn2"
	
	* 
	* ==> kube-proxy [89cce5f88e42] <==
	* I0731 11:01:48.136009       1 node.go:141] Successfully retrieved node IP: 192.168.105.14
	I0731 11:01:48.136037       1 server_others.go:110] "Detected node IP" address="192.168.105.14"
	I0731 11:01:48.136049       1 server_others.go:554] "Using iptables proxy"
	I0731 11:01:48.155052       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0731 11:01:48.155061       1 server_others.go:192] "Using iptables Proxier"
	I0731 11:01:48.155078       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 11:01:48.155271       1 server.go:658] "Version info" version="v1.27.3"
	I0731 11:01:48.155275       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 11:01:48.155613       1 config.go:188] "Starting service config controller"
	I0731 11:01:48.155617       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 11:01:48.155629       1 config.go:97] "Starting endpoint slice config controller"
	I0731 11:01:48.155631       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 11:01:48.155702       1 config.go:315] "Starting node config controller"
	I0731 11:01:48.155704       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 11:01:48.255747       1 shared_informer.go:318] Caches are synced for node config
	I0731 11:01:48.255772       1 shared_informer.go:318] Caches are synced for service config
	I0731 11:01:48.255785       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d7b44f68d80f] <==
	* W0731 11:01:31.687854       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:01:31.687858       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 11:01:31.687875       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:01:31.687882       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 11:01:31.688915       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 11:01:31.688923       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 11:01:31.688935       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 11:01:31.688939       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 11:01:31.688958       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:01:31.688962       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 11:01:31.688985       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:01:31.688992       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 11:01:31.689010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:01:31.689014       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 11:01:31.689027       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 11:01:31.689030       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 11:01:31.688919       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:01:31.689065       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 11:01:32.510526       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:01:32.510705       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 11:01:32.567258       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:01:32.567325       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 11:01:32.633444       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:01:32.633845       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 11:01:33.187152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-31 10:55:52 UTC, ends at Mon 2023-07-31 11:02:38 UTC. --
	Jul 31 11:02:23 functional-652000 kubelet[18967]: I0731 11:02:23.182348   18967 scope.go:115] "RemoveContainer" containerID="b617574ee9aac7af7f885bbef56249095ce792ef96adaea318fc5ed28c90e8a7"
	Jul 31 11:02:23 functional-652000 kubelet[18967]: E0731 11:02:23.182752   18967 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-7cxn2_default(c6a5cdab-4b5d-47ca-a596-cdc64e131dc8)\"" pod="default/hello-node-7b684b55f9-7cxn2" podUID=c6a5cdab-4b5d-47ca-a596-cdc64e131dc8
	Jul 31 11:02:28 functional-652000 kubelet[18967]: I0731 11:02:28.546789   18967 scope.go:115] "RemoveContainer" containerID="58a28839528a56943242bc13708b8c2fdf05bf4f41fb002e6055591ac1376149"
	Jul 31 11:02:29 functional-652000 kubelet[18967]: I0731 11:02:29.280258   18967 scope.go:115] "RemoveContainer" containerID="58a28839528a56943242bc13708b8c2fdf05bf4f41fb002e6055591ac1376149"
	Jul 31 11:02:29 functional-652000 kubelet[18967]: I0731 11:02:29.280601   18967 scope.go:115] "RemoveContainer" containerID="96556b3ee14af7dd1ceae7b43e5ff8a7f5f55efd688d01ba50651b9a64c738b7"
	Jul 31 11:02:29 functional-652000 kubelet[18967]: E0731 11:02:29.282795   18967 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-5pg7d_default(83b14538-36e7-4598-afb6-9149c3fff7d9)\"" pod="default/hello-node-connect-58d66798bb-5pg7d" podUID=83b14538-36e7-4598-afb6-9149c3fff7d9
	Jul 31 11:02:29 functional-652000 kubelet[18967]: I0731 11:02:29.859810   18967 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:02:29 functional-652000 kubelet[18967]: I0731 11:02:29.888059   18967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ad9b6d0d-34bb-467b-85c6-532f859156e5-test-volume\") pod \"busybox-mount\" (UID: \"ad9b6d0d-34bb-467b-85c6-532f859156e5\") " pod="default/busybox-mount"
	Jul 31 11:02:29 functional-652000 kubelet[18967]: I0731 11:02:29.888086   18967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8fl\" (UniqueName: \"kubernetes.io/projected/ad9b6d0d-34bb-467b-85c6-532f859156e5-kube-api-access-5k8fl\") pod \"busybox-mount\" (UID: \"ad9b6d0d-34bb-467b-85c6-532f859156e5\") " pod="default/busybox-mount"
	Jul 31 11:02:30 functional-652000 kubelet[18967]: I0731 11:02:30.335207   18967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c"
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.538761   18967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k8fl\" (UniqueName: \"kubernetes.io/projected/ad9b6d0d-34bb-467b-85c6-532f859156e5-kube-api-access-5k8fl\") pod \"ad9b6d0d-34bb-467b-85c6-532f859156e5\" (UID: \"ad9b6d0d-34bb-467b-85c6-532f859156e5\") "
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.538997   18967 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ad9b6d0d-34bb-467b-85c6-532f859156e5-test-volume\") pod \"ad9b6d0d-34bb-467b-85c6-532f859156e5\" (UID: \"ad9b6d0d-34bb-467b-85c6-532f859156e5\") "
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.539046   18967 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b6d0d-34bb-467b-85c6-532f859156e5-test-volume" (OuterVolumeSpecName: "test-volume") pod "ad9b6d0d-34bb-467b-85c6-532f859156e5" (UID: "ad9b6d0d-34bb-467b-85c6-532f859156e5"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.541538   18967 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b6d0d-34bb-467b-85c6-532f859156e5-kube-api-access-5k8fl" (OuterVolumeSpecName: "kube-api-access-5k8fl") pod "ad9b6d0d-34bb-467b-85c6-532f859156e5" (UID: "ad9b6d0d-34bb-467b-85c6-532f859156e5"). InnerVolumeSpecName "kube-api-access-5k8fl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 11:02:34 functional-652000 kubelet[18967]: E0731 11:02:34.550658   18967 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 31 11:02:34 functional-652000 kubelet[18967]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 11:02:34 functional-652000 kubelet[18967]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 11:02:34 functional-652000 kubelet[18967]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.639898   18967 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ad9b6d0d-34bb-467b-85c6-532f859156e5-test-volume\") on node \"functional-652000\" DevicePath \"\""
	Jul 31 11:02:34 functional-652000 kubelet[18967]: I0731 11:02:34.639944   18967 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5k8fl\" (UniqueName: \"kubernetes.io/projected/ad9b6d0d-34bb-467b-85c6-532f859156e5-kube-api-access-5k8fl\") on node \"functional-652000\" DevicePath \"\""
	Jul 31 11:02:35 functional-652000 kubelet[18967]: I0731 11:02:35.413787   18967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf99126da528bfb544f5356d2069484d39b463f0b35e6ad926371be14c0072c"
	Jul 31 11:02:37 functional-652000 kubelet[18967]: I0731 11:02:37.547864   18967 scope.go:115] "RemoveContainer" containerID="b617574ee9aac7af7f885bbef56249095ce792ef96adaea318fc5ed28c90e8a7"
	Jul 31 11:02:38 functional-652000 kubelet[18967]: I0731 11:02:38.440686   18967 scope.go:115] "RemoveContainer" containerID="b617574ee9aac7af7f885bbef56249095ce792ef96adaea318fc5ed28c90e8a7"
	Jul 31 11:02:38 functional-652000 kubelet[18967]: I0731 11:02:38.441029   18967 scope.go:115] "RemoveContainer" containerID="c053b09a2501420a790a0220b29ea5377df92f7431cc207a80b9d9701a3caa47"
	Jul 31 11:02:38 functional-652000 kubelet[18967]: E0731 11:02:38.441156   18967 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-7cxn2_default(c6a5cdab-4b5d-47ca-a596-cdc64e131dc8)\"" pod="default/hello-node-7b684b55f9-7cxn2" podUID=c6a5cdab-4b5d-47ca-a596-cdc64e131dc8
	
	* 
	* ==> storage-provisioner [2f17d5cd98dd] <==
	* I0731 11:01:49.022731       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 11:01:49.026525       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 11:01:49.026540       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 11:01:49.029474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 11:01:49.029599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-652000_8ff442cc-1086-4778-913c-56eb7de5e1e9!
	I0731 11:01:49.029985       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c9d9f2f-0346-4e7c-9ee8-ff8e27ba0250", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-652000_8ff442cc-1086-4778-913c-56eb7de5e1e9 became leader
	I0731 11:01:49.129727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-652000_8ff442cc-1086-4778-913c-56eb7de5e1e9!
	I0731 11:02:02.258897       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0731 11:02:02.259040       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d04c9d8c-b998-410c-a55a-61ee4c16a925 396 0 2023-07-31 11:01:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-07-31 11:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-314f671c-93f7-4784-b414-2ac14e1558ed &PersistentVolumeClaim{ObjectMeta:{myclaim  default  314f671c-93f7-4784-b414-2ac14e1558ed 488 0 2023-07-31 11:02:02 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-07-31 11:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-07-31 11:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0731 11:02:02.260660       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-314f671c-93f7-4784-b414-2ac14e1558ed" provisioned
	I0731 11:02:02.260702       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0731 11:02:02.260731       1 volume_store.go:212] Trying to save persistentvolume "pvc-314f671c-93f7-4784-b414-2ac14e1558ed"
	I0731 11:02:02.261171       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"314f671c-93f7-4784-b414-2ac14e1558ed", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0731 11:02:02.266710       1 volume_store.go:219] persistentvolume "pvc-314f671c-93f7-4784-b414-2ac14e1558ed" saved
	I0731 11:02:02.266843       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"314f671c-93f7-4784-b414-2ac14e1558ed", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-314f671c-93f7-4784-b414-2ac14e1558ed
	

                                                
                                                
-- /stdout --
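Of note in the dump above: the kubelet's repeated "Could not set up iptables canary" error (ip6tables `nat` table missing) is most likely an artifact of the guest kernel shipping without ip6table_nat support rather than a contributor to this failure; the IPv4 proxier came up normally per the kube-proxy log. If the functional-652000 profile were still running, a hypothetical check from the host should reproduce the same message inside the guest:

    # Expect "Table does not exist" here, matching the kubelet canary error.
    out/minikube-darwin-arm64 -p functional-652000 ssh -- sudo ip6tables -t nat -L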
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-652000 -n functional-652000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-652000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-652000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-652000 describe pod busybox-mount:
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-652000/192.168.105.14
	Start Time:       Mon, 31 Jul 2023 04:02:29 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://081676da71f5f0299f8580e43467c819ef0d44f40632aa1f0e436be35a001d5a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 31 Jul 2023 04:02:32 -0700
	      Finished:     Mon, 31 Jul 2023 04:02:32 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5k8fl (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5k8fl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-652000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.492522122s (2.492531288s including waiting)
	  Normal  Created    7s    kubelet            Created container mount-munger
	  Normal  Started    7s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
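The post-mortem points at a consistent root cause: both hello-node deployments run the echoserver-arm image, and the kubelet log shows that container exiting immediately and entering CrashLoopBackOff, so the tunnelled service never has a ready endpoint. Were the profile still up, the crash could be inspected directly; the commands below are a hypothetical sketch (the app=hello-node-connect selector assumes the default label that kubectl create deployment applies):

    # Restart counts and the CrashLoopBackOff waiting reason.
    kubectl --context functional-652000 get pods -l app=hello-node-connect
    # Output of the last crashed container, to see why echoserver-arm exits.
    kubectl --context functional-652000 logs -l app=hello-node-connect --previous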
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-652000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-652000 image ls --format short --alsologtostderr:
I0731 04:03:05.333027    6058 out.go:296] Setting OutFile to fd 1 ...
I0731 04:03:05.333164    6058 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.333169    6058 out.go:309] Setting ErrFile to fd 2...
I0731 04:03:05.333172    6058 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.333283    6058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:03:05.333687    6058 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.333746    6058 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
W0731 04:03:05.333983    6058 cache_images.go:695] error getting status for functional-652000: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/monitor: connect: connection refused
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
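The stderr above explains the empty listing: the qemu2 machine's monitor socket refused the connection, so minikube never reached the Docker daemon and printed nothing, which the test then read as the pause image being absent. A hypothetical first step before re-running would be to confirm the VM state and retry the listing:

    # A dead qemu2 VM shows up here as a Stopped/Nonexistent host.
    out/minikube-darwin-arm64 status -p functional-652000
    # Retry in table form once the machine reports Running.
    out/minikube-darwin-arm64 -p functional-652000 image ls --format table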
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestImageBuild/serial/BuildWithBuildArg (1.16s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-484000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-484000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 402ab4d7d928
	Removing intermediate container 402ab4d7d928
	 ---> f3cc547f0785
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 485dc42870a0
	Removing intermediate container 485dc42870a0
	 ---> 98a3600d4a4b
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 87fbc254ad64
	exec /bin/sh: exec format error
	

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1
** /stderr **
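The `exec format error` at step 4 is the usual amd64-on-arm64 symptom: gcr.io/google-containers/alpine-with-bash:1.0 appears to be published for linux/amd64 only (each build step warns about the platform mismatch), so the arm64 guest cannot exec its /bin/sh in a RUN step. A hypothetical way to confirm the published platforms from any machine with registry access:

    # A single-arch image reports exactly one architecture entry here.
    docker manifest inspect --verbose gcr.io/google-containers/alpine-with-bash:1.0 | grep -i architecture

Fixing the test on this agent would mean either an arm64-capable base image or binfmt/qemu user emulation inside the guest.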
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-484000 -n image-484000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-484000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-652000 image load --daemon                    | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	| image          | functional-652000 image load --daemon                    | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:02 PDT | 31 Jul 23 04:02 PDT |
	| image          | functional-652000 image load --daemon                    | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| image          | functional-652000 image save                             | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image rm                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| image          | functional-652000 image load                             | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| image          | functional-652000 image save --daemon                    | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-652000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-652000 ssh pgrep                              | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image build -t                         | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | localhost/my-image:functional-652000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-652000                                        | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-652000 image ls                               | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| delete         | -p functional-652000                                     | functional-652000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| start          | -p image-484000 --driver=qemu2                           | image-484000      | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-484000      | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-484000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-484000      | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-484000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 04:03:08
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 04:03:08.597565    6083 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:03:08.597665    6083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:03:08.597666    6083 out.go:309] Setting ErrFile to fd 2...
	I0731 04:03:08.597668    6083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:03:08.597777    6083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:03:08.598769    6083 out.go:303] Setting JSON to false
	I0731 04:03:08.615032    6083 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9159,"bootTime":1690792229,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:03:08.615085    6083 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:03:08.619111    6083 out.go:177] * [image-484000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:03:08.626141    6083 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:03:08.626190    6083 notify.go:220] Checking for updates...
	I0731 04:03:08.633091    6083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:03:08.636170    6083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:03:08.639060    6083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:03:08.642106    6083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:03:08.645124    6083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:03:08.648193    6083 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:03:08.652100    6083 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:03:08.659076    6083 start.go:298] selected driver: qemu2
	I0731 04:03:08.659078    6083 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:03:08.659082    6083 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:03:08.659144    6083 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:03:08.662104    6083 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:03:08.667277    6083 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 04:03:08.667360    6083 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 04:03:08.667373    6083 cni.go:84] Creating CNI manager for ""
	I0731 04:03:08.667385    6083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:03:08.667388    6083 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:03:08.667396    6083 start_flags.go:319] config:
	{Name:image-484000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-484000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:03:08.671626    6083 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:03:08.674100    6083 out.go:177] * Starting control plane node image-484000 in cluster image-484000
	I0731 04:03:08.681936    6083 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:03:08.681965    6083 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:03:08.681978    6083 cache.go:57] Caching tarball of preloaded images
	I0731 04:03:08.682042    6083 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:03:08.682046    6083 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:03:08.682255    6083 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/config.json ...
	I0731 04:03:08.682266    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/config.json: {Name:mkb7dcfd9384b2fbeaaf5e74c97747580474754a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:08.682480    6083 start.go:365] acquiring machines lock for image-484000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:03:08.682510    6083 start.go:369] acquired machines lock for "image-484000" in 25.958µs
	I0731 04:03:08.682522    6083 start.go:93] Provisioning new machine with config: &{Name:image-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-484000 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:03:08.682555    6083 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:03:08.689068    6083 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 04:03:08.710851    6083 start.go:159] libmachine.API.Create for "image-484000" (driver="qemu2")
	I0731 04:03:08.710866    6083 client.go:168] LocalClient.Create starting
	I0731 04:03:08.710928    6083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:03:08.710956    6083 main.go:141] libmachine: Decoding PEM data...
	I0731 04:03:08.710965    6083 main.go:141] libmachine: Parsing certificate...
	I0731 04:03:08.711000    6083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:03:08.711013    6083 main.go:141] libmachine: Decoding PEM data...
	I0731 04:03:08.711022    6083 main.go:141] libmachine: Parsing certificate...
	I0731 04:03:08.711309    6083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:03:08.829430    6083 main.go:141] libmachine: Creating SSH key...
	I0731 04:03:09.019400    6083 main.go:141] libmachine: Creating Disk image...
	I0731 04:03:09.019407    6083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:03:09.019584    6083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2
	I0731 04:03:09.029040    6083 main.go:141] libmachine: STDOUT: 
	I0731 04:03:09.029061    6083 main.go:141] libmachine: STDERR: 
	I0731 04:03:09.029122    6083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2 +20000M
	I0731 04:03:09.036481    6083 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:03:09.036490    6083 main.go:141] libmachine: STDERR: 
	I0731 04:03:09.036513    6083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2
	I0731 04:03:09.036516    6083 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:03:09.036558    6083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:06:2e:0b:f5:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/disk.qcow2
	I0731 04:03:09.071348    6083 main.go:141] libmachine: STDOUT: 
	I0731 04:03:09.071375    6083 main.go:141] libmachine: STDERR: 
	I0731 04:03:09.071379    6083 main.go:141] libmachine: Attempt 0
	I0731 04:03:09.071390    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:09.071475    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:09.071494    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:03:09.071502    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:03:09.071506    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:03:09.071510    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:03:09.071514    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:03:09.071519    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:03:09.071523    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:03:09.071527    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:03:09.071536    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:03:09.071541    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:03:09.071545    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:03:09.071552    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:03:09.071557    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:03:11.073664    6083 main.go:141] libmachine: Attempt 1
	I0731 04:03:11.073721    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:11.074232    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:13.076673    6083 main.go:141] libmachine: Attempt 2
	I0731 04:03:13.076688    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:13.076801    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:15.078861    6083 main.go:141] libmachine: Attempt 3
	I0731 04:03:15.078866    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:15.078943    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:17.080968    6083 main.go:141] libmachine: Attempt 4
	I0731 04:03:17.080974    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:17.081019    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:19.082311    6083 main.go:141] libmachine: Attempt 5
	I0731 04:03:19.082323    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:19.082421    6083 main.go:141] libmachine: Found 13 entries in /var/db/dhcpd_leases!
	I0731 04:03:21.084502    6083 main.go:141] libmachine: Attempt 6
	I0731 04:03:21.084519    6083 main.go:141] libmachine: Searching for 1a:6:2e:b:f5:24 in /var/db/dhcpd_leases ...
	I0731 04:03:21.084666    6083 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:03:21.084677    6083 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:03:21.084680    6083 main.go:141] libmachine: Found match: 1a:6:2e:b:f5:24
	I0731 04:03:21.084689    6083 main.go:141] libmachine: IP: 192.168.105.15
	I0731 04:03:21.084694    6083 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.15)...
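The driver polls /var/db/dhcpd_leases every two seconds until the new guest's MAC shows up. Note that the MAC handed to QEMU (1a:06:2e:0b:f5:24) is searched for as 1a:6:2e:b:f5:24: macOS dhcpd stores hardware addresses with leading zeros stripped from each octet, so the driver normalizes before matching. A minimal sketch of that lookup, assuming the usual lease-block fields ip_address= and hw_address=1, (the helper names here are illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// trimMAC drops leading zeros from each octet so 1a:06:2e:0b:f5:24
	// matches the unpadded 1a:6:2e:b:f5:24 form stored by macOS dhcpd.
	func trimMAC(mac string) string {
		parts := strings.Split(mac, ":")
		for i, p := range parts {
			t := strings.TrimLeft(p, "0")
			if t == "" {
				t = "0"
			}
			parts[i] = t
		}
		return strings.Join(parts, ":")
	}

	// findLeaseIP scans /var/db/dhcpd_leases-style entries for the MAC
	// and returns the ip_address recorded in the same lease block.
	func findLeaseIP(path, mac string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		want := trimMAC(mac)
		ipRe := regexp.MustCompile(`ip_address=(\S+)`)
		hwRe := regexp.MustCompile(`hw_address=1,(\S+)`)
		ip := ""
		for _, line := range strings.Split(string(data), "\n") {
			if m := ipRe.FindStringSubmatch(line); m != nil {
				ip = m[1] // remember the IP seen earlier in this block
			}
			if m := hwRe.FindStringSubmatch(line); m != nil && m[1] == want {
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease for %s", want)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "1a:06:2e:0b:f5:24")
		fmt.Println(ip, err)
	}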
	I0731 04:03:22.090153    6083 machine.go:88] provisioning docker machine ...
	I0731 04:03:22.090170    6083 buildroot.go:166] provisioning hostname "image-484000"
	I0731 04:03:22.090218    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.090465    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.090469    6083 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-484000 && echo "image-484000" | sudo tee /etc/hostname
	I0731 04:03:22.157392    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: image-484000
	
	I0731 04:03:22.157459    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.157732    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.157738    6083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-484000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-484000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-484000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 04:03:22.223379    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
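Each "About to run SSH command" step above is one SSH session that runs a single shell command as the docker user, authenticated with the machine's generated id_rsa key. A minimal sketch of that pattern using golang.org/x/crypto/ssh (minikube's native client is built on the same package, but this exact helper is assumed):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH opens one session with key auth and runs a single command.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, throwaway host key
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("192.168.105.15:22", "docker",
			"/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa",
			`sudo hostname image-484000 && echo "image-484000" | sudo tee /etc/hostname`)
		fmt.Println(out, err)
	}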
	I0731 04:03:22.223387    6083 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16968-4815/.minikube CaCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16968-4815/.minikube}
	I0731 04:03:22.223398    6083 buildroot.go:174] setting up certificates
	I0731 04:03:22.223404    6083 provision.go:83] configureAuth start
	I0731 04:03:22.223408    6083 provision.go:138] copyHostCerts
	I0731 04:03:22.223496    6083 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem, removing ...
	I0731 04:03:22.223500    6083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem
	I0731 04:03:22.223589    6083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem (1078 bytes)
	I0731 04:03:22.223775    6083 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem, removing ...
	I0731 04:03:22.223776    6083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem
	I0731 04:03:22.223815    6083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem (1123 bytes)
	I0731 04:03:22.223918    6083 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem, removing ...
	I0731 04:03:22.223919    6083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem
	I0731 04:03:22.223955    6083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem (1675 bytes)
	I0731 04:03:22.224028    6083 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem org=jenkins.image-484000 san=[192.168.105.15 192.168.105.15 localhost 127.0.0.1 minikube image-484000]
	I0731 04:03:22.307408    6083 provision.go:172] copyRemoteCerts
	I0731 04:03:22.307447    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 04:03:22.307453    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:22.342842    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0731 04:03:22.350557    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 04:03:22.357535    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 04:03:22.364288    6083 provision.go:86] duration metric: configureAuth took 140.883458ms
	I0731 04:03:22.364294    6083 buildroot.go:189] setting minikube options for container-runtime
	I0731 04:03:22.364396    6083 config.go:182] Loaded profile config "image-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:03:22.364426    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.364647    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.364650    6083 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 04:03:22.428515    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 04:03:22.428520    6083 buildroot.go:70] root file system type: tmpfs
	I0731 04:03:22.428585    6083 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 04:03:22.428625    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.428875    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.428916    6083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 04:03:22.497162    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 04:03:22.497202    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.497451    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.497459    6083 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 04:03:22.858366    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 04:03:22.858375    6083 machine.go:91] provisioned docker machine in 768.233334ms
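The unit install two steps back is guarded: "diff -u old new || { mv ...; systemctl daemon-reload, enable, restart; }" only touches the service when the rendered file differs from what is already on disk. On this first boot the diff fails because no docker.service exists yet, hence the "can't stat" message and the fresh symlink; on a re-provision with identical content the whole restart is skipped. The same compare-before-write guard in Go, as an assumed sketch:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged installs data at path and reports whether anything changed,
	// so the caller can skip a needless daemon-reload and service restart.
	func writeIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // identical content: leave the running service alone
		}
		if err := os.WriteFile(path, data, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		changed, err := writeIfChanged("/lib/systemd/system/docker.service", unit)
		fmt.Println(changed, err) // restart docker only when changed is true
	}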
	I0731 04:03:22.858380    6083 client.go:171] LocalClient.Create took 14.147830667s
	I0731 04:03:22.858387    6083 start.go:167] duration metric: libmachine.API.Create for "image-484000" took 14.14785925s
	I0731 04:03:22.858390    6083 start.go:300] post-start starting for "image-484000" (driver="qemu2")
	I0731 04:03:22.858394    6083 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 04:03:22.858467    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 04:03:22.858475    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:22.894555    6083 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 04:03:22.895809    6083 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 04:03:22.895816    6083 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/addons for local assets ...
	I0731 04:03:22.895881    6083 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/files for local assets ...
	I0731 04:03:22.895989    6083 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem -> 52232.pem in /etc/ssl/certs
	I0731 04:03:22.896104    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 04:03:22.898997    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /etc/ssl/certs/52232.pem (1708 bytes)
	I0731 04:03:22.905802    6083 start.go:303] post-start completed in 47.409625ms
	I0731 04:03:22.906171    6083 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/config.json ...
	I0731 04:03:22.906325    6083 start.go:128] duration metric: createHost completed in 14.224087625s
	I0731 04:03:22.906352    6083 main.go:141] libmachine: Using SSH client type: native
	I0731 04:03:22.906564    6083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f5170] 0x1010f7bd0 <nil>  [] 0s} 192.168.105.15 22 <nil> <nil>}
	I0731 04:03:22.906566    6083 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 04:03:22.972374    6083 main.go:141] libmachine: SSH cmd err, output: <nil>: 1690801402.490214168
	
	I0731 04:03:22.972378    6083 fix.go:206] guest clock: 1690801402.490214168
	I0731 04:03:22.972381    6083 fix.go:219] Guest: 2023-07-31 04:03:22.490214168 -0700 PDT Remote: 2023-07-31 04:03:22.906326 -0700 PDT m=+14.328455626 (delta=-416.111832ms)
	I0731 04:03:22.972394    6083 fix.go:190] guest clock delta is within tolerance: -416.111832ms
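The guest clock check runs date +%s.%N inside the VM and compares the result against the host's wall clock at the moment the command returns; here 1690801402.490214168 (04:03:22.490 PDT) against 04:03:22.906 PDT gives the -416ms delta, which is inside the skew tolerance, so no forced clock sync is needed. Reproducing the arithmetic (the one-second threshold below is an assumption; the real cutoff lives in minikube's fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Both values are taken from the log lines above.
		guest := time.Unix(1690801402, 490214168) // VM's "date +%s.%N"
		host := time.Unix(1690801402, 906326000)  // host wall clock at return
		delta := guest.Sub(host)                  // -416.111832ms, as logged
		const tolerance = time.Second             // assumed threshold
		fmt.Println(delta, delta.Abs() < tolerance)
	}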
	I0731 04:03:22.972396    6083 start.go:83] releasing machines lock for "image-484000", held for 14.2902045s
	I0731 04:03:22.972716    6083 ssh_runner.go:195] Run: cat /version.json
	I0731 04:03:22.972722    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:22.972735    6083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 04:03:22.972754    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:23.051043    6083 ssh_runner.go:195] Run: systemctl --version
	I0731 04:03:23.053088    6083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 04:03:23.054951    6083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 04:03:23.054980    6083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 04:03:23.060087    6083 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 04:03:23.060094    6083 start.go:466] detecting cgroup driver to use...
	I0731 04:03:23.060158    6083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 04:03:23.065598    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 04:03:23.068722    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 04:03:23.071794    6083 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 04:03:23.071814    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 04:03:23.075224    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 04:03:23.078560    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 04:03:23.081717    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 04:03:23.084710    6083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 04:03:23.087779    6083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 04:03:23.091498    6083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 04:03:23.094779    6083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 04:03:23.097824    6083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:03:23.171645    6083 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 04:03:23.181778    6083 start.go:466] detecting cgroup driver to use...
	I0731 04:03:23.181826    6083 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 04:03:23.187202    6083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 04:03:23.192315    6083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 04:03:23.198395    6083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 04:03:23.202973    6083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 04:03:23.207608    6083 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 04:03:23.255781    6083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 04:03:23.261081    6083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 04:03:23.266724    6083 ssh_runner.go:195] Run: which cri-dockerd
	I0731 04:03:23.268097    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 04:03:23.271114    6083 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 04:03:23.276173    6083 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 04:03:23.357059    6083 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 04:03:23.434053    6083 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 04:03:23.434061    6083 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0731 04:03:23.439484    6083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:03:23.515533    6083 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 04:03:24.679738    6083 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164218458s)
	I0731 04:03:24.679789    6083 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 04:03:24.749550    6083 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 04:03:24.825119    6083 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 04:03:24.905120    6083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:03:24.980782    6083 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 04:03:24.987643    6083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:03:25.085587    6083 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0731 04:03:25.109102    6083 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 04:03:25.109188    6083 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
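"Will wait 60s for socket path" is a poll-until-stat-succeeds loop against /var/run/cri-dockerd.sock, which here succeeds on the first check because the socket was just restarted. A sketch of that wait (the 500ms poll interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the unix socket appears or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}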
	I0731 04:03:25.111329    6083 start.go:534] Will wait 60s for crictl version
	I0731 04:03:25.111359    6083 ssh_runner.go:195] Run: which crictl
	I0731 04:03:25.112951    6083 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 04:03:25.132768    6083 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0731 04:03:25.132841    6083 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 04:03:25.142471    6083 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 04:03:25.155503    6083 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0731 04:03:25.155587    6083 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0731 04:03:25.156975    6083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 04:03:25.161538    6083 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:03:25.161586    6083 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 04:03:25.166956    6083 docker.go:636] Got preloaded images: 
	I0731 04:03:25.166960    6083 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0731 04:03:25.166993    6083 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 04:03:25.169918    6083 ssh_runner.go:195] Run: which lz4
	I0731 04:03:25.171225    6083 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 04:03:25.172471    6083 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 04:03:25.172482    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0731 04:03:26.416769    6083 docker.go:600] Took 1.245614 seconds to copy over tarball
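For scale: 343,635,922 bytes copied in 1.245614s works out to roughly 276 MB/s (about 263 MiB/s) across the SSH link to the VM, which is why shipping all control-plane images as a single lz4 preload tarball is so much faster than pulling them individually.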
	I0731 04:03:26.416814    6083 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 04:03:27.441204    6083 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.024402292s)
	I0731 04:03:27.441212    6083 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 04:03:27.456165    6083 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 04:03:27.459426    6083 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0731 04:03:27.464456    6083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:03:27.541841    6083 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 04:03:28.989513    6083 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.447691583s)
	I0731 04:03:28.989605    6083 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 04:03:28.995782    6083 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 04:03:28.995789    6083 cache_images.go:84] Images are preloaded, skipping loading
	I0731 04:03:28.995836    6083 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 04:03:29.003768    6083 cni.go:84] Creating CNI manager for ""
	I0731 04:03:29.003774    6083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:03:29.003784    6083 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 04:03:29.003792    6083 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.15 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-484000 NodeName:image-484000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 04:03:29.003854    6083 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-484000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 04:03:29.003882    6083 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-484000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:image-484000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 04:03:29.003931    6083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 04:03:29.007373    6083 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 04:03:29.007395    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 04:03:29.010454    6083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0731 04:03:29.015298    6083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 04:03:29.020195    6083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0731 04:03:29.025281    6083 ssh_runner.go:195] Run: grep 192.168.105.15	control-plane.minikube.internal$ /etc/hosts
	I0731 04:03:29.026544    6083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 04:03:29.030189    6083 certs.go:56] Setting up /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000 for IP: 192.168.105.15
	I0731 04:03:29.030198    6083 certs.go:190] acquiring lock for shared ca certs: {Name:mk645bb5ce6691935288c693436a38a3c4bde2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.030327    6083 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key
	I0731 04:03:29.030363    6083 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key
	I0731 04:03:29.030386    6083 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.key
	I0731 04:03:29.030392    6083 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.crt with IP's: []
	I0731 04:03:29.093062    6083 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.crt ...
	I0731 04:03:29.093066    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.crt: {Name:mk8b1a4cd7ed3b4f2bfc5379f1c46052308685e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.093251    6083 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.key ...
	I0731 04:03:29.093253    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/client.key: {Name:mkba6b8e75de22bc4ab987f8b2f431b38d3ed675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.093358    6083 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key.8abb5781
	I0731 04:03:29.093363    6083 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt.8abb5781 with IP's: [192.168.105.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 04:03:29.271935    6083 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt.8abb5781 ...
	I0731 04:03:29.271940    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt.8abb5781: {Name:mkd214813c7842d2474c87115980d56ee181d51c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.272140    6083 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key.8abb5781 ...
	I0731 04:03:29.272143    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key.8abb5781: {Name:mk596ac3e2d0450720f0bf4246a252997fba4d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.272254    6083 certs.go:337] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt.8abb5781 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt
	I0731 04:03:29.272341    6083 certs.go:341] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key.8abb5781 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key
	I0731 04:03:29.272417    6083 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.key
	I0731 04:03:29.272421    6083 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.crt with IP's: []
	I0731 04:03:29.313494    6083 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.crt ...
	I0731 04:03:29.313496    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.crt: {Name:mkb77e92cb32be38fff7e644123a7b095b603c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.313602    6083 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.key ...
	I0731 04:03:29.313603    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.key: {Name:mk7c5bc93cd8d089edaac2ca774b20ed012b279e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:29.313822    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem (1338 bytes)
	W0731 04:03:29.313845    6083 certs.go:433] ignoring /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223_empty.pem, impossibly tiny 0 bytes
	I0731 04:03:29.313850    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 04:03:29.313867    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem (1078 bytes)
	I0731 04:03:29.313883    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem (1123 bytes)
	I0731 04:03:29.313898    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem (1675 bytes)
	I0731 04:03:29.313935    6083 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem (1708 bytes)
	I0731 04:03:29.314213    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 04:03:29.321488    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 04:03:29.327884    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 04:03:29.335099    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/image-484000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 04:03:29.342303    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 04:03:29.349199    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 04:03:29.355739    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 04:03:29.362869    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 04:03:29.369761    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 04:03:29.376213    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem --> /usr/share/ca-certificates/5223.pem (1338 bytes)
	I0731 04:03:29.383102    6083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /usr/share/ca-certificates/52232.pem (1708 bytes)
	I0731 04:03:29.389959    6083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 04:03:29.394927    6083 ssh_runner.go:195] Run: openssl version
	I0731 04:03:29.396830    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 04:03:29.399971    6083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:03:29.401438    6083 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:03:29.401455    6083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:03:29.403192    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 04:03:29.406515    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5223.pem && ln -fs /usr/share/ca-certificates/5223.pem /etc/ssl/certs/5223.pem"
	I0731 04:03:29.409397    6083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5223.pem
	I0731 04:03:29.410804    6083 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 10:55 /usr/share/ca-certificates/5223.pem
	I0731 04:03:29.410820    6083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5223.pem
	I0731 04:03:29.412617    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5223.pem /etc/ssl/certs/51391683.0"
	I0731 04:03:29.415765    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/52232.pem && ln -fs /usr/share/ca-certificates/52232.pem /etc/ssl/certs/52232.pem"
	I0731 04:03:29.419093    6083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/52232.pem
	I0731 04:03:29.420434    6083 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 10:55 /usr/share/ca-certificates/52232.pem
	I0731 04:03:29.420454    6083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/52232.pem
	I0731 04:03:29.422188    6083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/52232.pem /etc/ssl/certs/3ec20f2e.0"
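The b5213941.0, 51391683.0, and 3ec20f2e.0 symlinks created above follow OpenSSL's hashed-directory lookup convention: certificate verification finds a CA by the subject-name hash that "openssl x509 -hash -noout" prints, so each PEM under /usr/share/ca-certificates gets a <hash>.0 alias in /etc/ssl/certs. The same step in Go, shelling out to openssl just as the provisioner's shell does (the helper name is mine):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at pem,
	// where <hash> is the OpenSSL subject-name hash of the certificate.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}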
	I0731 04:03:29.425026    6083 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 04:03:29.426319    6083 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 04:03:29.426347    6083 kubeadm.go:404] StartCluster: {Name:image-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-484000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.15 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:03:29.426407    6083 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 04:03:29.431579    6083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 04:03:29.434965    6083 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 04:03:29.437758    6083 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 04:03:29.440402    6083 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 04:03:29.440413    6083 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 04:03:29.463323    6083 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 04:03:29.463354    6083 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 04:03:29.522511    6083 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 04:03:29.522567    6083 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 04:03:29.522619    6083 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 04:03:29.584981    6083 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 04:03:29.595772    6083 out.go:204]   - Generating certificates and keys ...
	I0731 04:03:29.595805    6083 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 04:03:29.595835    6083 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 04:03:29.663834    6083 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 04:03:29.743313    6083 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 04:03:29.805406    6083 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 04:03:29.959788    6083 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 04:03:30.019032    6083 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 04:03:30.019086    6083 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-484000 localhost] and IPs [192.168.105.15 127.0.0.1 ::1]
	I0731 04:03:30.182109    6083 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 04:03:30.182164    6083 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-484000 localhost] and IPs [192.168.105.15 127.0.0.1 ::1]
	I0731 04:03:30.307477    6083 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 04:03:30.368385    6083 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 04:03:30.461200    6083 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 04:03:30.461225    6083 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 04:03:30.567593    6083 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 04:03:30.598980    6083 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 04:03:30.678605    6083 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 04:03:30.810282    6083 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 04:03:30.816783    6083 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 04:03:30.817168    6083 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 04:03:30.817188    6083 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 04:03:30.902110    6083 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 04:03:30.905769    6083 out.go:204]   - Booting up control plane ...
	I0731 04:03:30.905850    6083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 04:03:30.905886    6083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 04:03:30.905932    6083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 04:03:30.907393    6083 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 04:03:30.909405    6083 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 04:03:34.912271    6083 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002426 seconds
	I0731 04:03:34.912453    6083 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 04:03:34.926955    6083 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 04:03:35.438864    6083 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 04:03:35.438958    6083 kubeadm.go:322] [mark-control-plane] Marking the node image-484000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 04:03:35.950951    6083 kubeadm.go:322] [bootstrap-token] Using token: s24unr.8lh6krax2le4sjdf
	I0731 04:03:35.957989    6083 out.go:204]   - Configuring RBAC rules ...
	I0731 04:03:35.958149    6083 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 04:03:35.958236    6083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 04:03:35.961317    6083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 04:03:35.963289    6083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 04:03:35.965155    6083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 04:03:35.967754    6083 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 04:03:35.973911    6083 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 04:03:36.146088    6083 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 04:03:36.359308    6083 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 04:03:36.359737    6083 kubeadm.go:322] 
	I0731 04:03:36.359786    6083 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 04:03:36.359788    6083 kubeadm.go:322] 
	I0731 04:03:36.359831    6083 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 04:03:36.359833    6083 kubeadm.go:322] 
	I0731 04:03:36.359856    6083 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 04:03:36.359887    6083 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 04:03:36.359911    6083 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 04:03:36.359913    6083 kubeadm.go:322] 
	I0731 04:03:36.359949    6083 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 04:03:36.359952    6083 kubeadm.go:322] 
	I0731 04:03:36.359987    6083 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 04:03:36.359989    6083 kubeadm.go:322] 
	I0731 04:03:36.360020    6083 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 04:03:36.360059    6083 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 04:03:36.360094    6083 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 04:03:36.360096    6083 kubeadm.go:322] 
	I0731 04:03:36.360156    6083 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 04:03:36.360205    6083 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 04:03:36.360207    6083 kubeadm.go:322] 
	I0731 04:03:36.360272    6083 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token s24unr.8lh6krax2le4sjdf \
	I0731 04:03:36.360338    6083 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa \
	I0731 04:03:36.360352    6083 kubeadm.go:322] 	--control-plane 
	I0731 04:03:36.360354    6083 kubeadm.go:322] 
	I0731 04:03:36.360407    6083 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 04:03:36.360409    6083 kubeadm.go:322] 
	I0731 04:03:36.360463    6083 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token s24unr.8lh6krax2le4sjdf \
	I0731 04:03:36.360526    6083 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa 
	I0731 04:03:36.360589    6083 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 04:03:36.360596    6083 cni.go:84] Creating CNI manager for ""
	I0731 04:03:36.360603    6083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:03:36.363206    6083 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 04:03:36.371107    6083 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 04:03:36.375082    6083 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0731 04:03:36.379941    6083 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 04:03:36.379988    6083 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:03:36.380003    6083 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=image-484000 minikube.k8s.io/updated_at=2023_07_31T04_03_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:03:36.439731    6083 ops.go:34] apiserver oom_adj: -16
	I0731 04:03:36.439746    6083 kubeadm.go:1081] duration metric: took 59.800417ms to wait for elevateKubeSystemPrivileges.
	I0731 04:03:36.439750    6083 kubeadm.go:406] StartCluster complete in 7.013563959s
	I0731 04:03:36.439758    6083 settings.go:142] acquiring lock: {Name:mk7e2067b9c26be8d46dc95ba3a8a7ad946cadb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:36.439846    6083 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:03:36.440163    6083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/kubeconfig: {Name:mk98971837606256b8bab3d325e05dbfd512b496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:36.440329    6083 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 04:03:36.440370    6083 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 04:03:36.440405    6083 addons.go:69] Setting storage-provisioner=true in profile "image-484000"
	I0731 04:03:36.440411    6083 addons.go:231] Setting addon storage-provisioner=true in "image-484000"
	I0731 04:03:36.440422    6083 addons.go:69] Setting default-storageclass=true in profile "image-484000"
	I0731 04:03:36.440429    6083 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-484000"
	I0731 04:03:36.440435    6083 host.go:66] Checking if "image-484000" exists ...
	I0731 04:03:36.440514    6083 config.go:182] Loaded profile config "image-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:03:36.446205    6083 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:03:36.449186    6083 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:03:36.449190    6083 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 04:03:36.449197    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:36.453744    6083 addons.go:231] Setting addon default-storageclass=true in "image-484000"
	I0731 04:03:36.453759    6083 host.go:66] Checking if "image-484000" exists ...
	I0731 04:03:36.454437    6083 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 04:03:36.454440    6083 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 04:03:36.454446    6083 sshutil.go:53] new ssh client: &{IP:192.168.105.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/image-484000/id_rsa Username:docker}
	I0731 04:03:36.457410    6083 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-484000" context rescaled to 1 replicas
	I0731 04:03:36.457422    6083 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.15 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:03:36.461228    6083 out.go:177] * Verifying Kubernetes components...
	I0731 04:03:36.469214    6083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:03:36.484431    6083 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 04:03:36.484800    6083 api_server.go:52] waiting for apiserver process to appear ...
	I0731 04:03:36.484829    6083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:03:36.503370    6083 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:03:36.549712    6083 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 04:03:36.915520    6083 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0731 04:03:36.915530    6083 api_server.go:72] duration metric: took 458.109458ms to wait for apiserver process to appear ...
	I0731 04:03:36.915535    6083 api_server.go:88] waiting for apiserver healthz status ...
	I0731 04:03:36.915548    6083 api_server.go:253] Checking apiserver healthz at https://192.168.105.15:8443/healthz ...
	I0731 04:03:36.919239    6083 api_server.go:279] https://192.168.105.15:8443/healthz returned 200:
	ok
	I0731 04:03:36.919977    6083 api_server.go:141] control plane version: v1.27.3
	I0731 04:03:36.919981    6083 api_server.go:131] duration metric: took 4.444792ms to wait for apiserver health ...
	I0731 04:03:36.919984    6083 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 04:03:36.922725    6083 system_pods.go:59] 4 kube-system pods found
	I0731 04:03:36.922733    6083 system_pods.go:61] "etcd-image-484000" [d6b96531-4b75-461e-bbda-51dd25f18c37] Pending
	I0731 04:03:36.922736    6083 system_pods.go:61] "kube-apiserver-image-484000" [25ed3169-129e-4048-972e-b165ae665ae2] Pending
	I0731 04:03:36.922738    6083 system_pods.go:61] "kube-controller-manager-image-484000" [7ff7c908-9a67-4200-b342-718df3910e44] Pending
	I0731 04:03:36.922739    6083 system_pods.go:61] "kube-scheduler-image-484000" [865a730c-f266-44bd-a424-11c6f98eb8dd] Pending
	I0731 04:03:36.922741    6083 system_pods.go:74] duration metric: took 2.755708ms to wait for pod list to return data ...
	I0731 04:03:36.922744    6083 kubeadm.go:581] duration metric: took 465.324916ms to wait for : map[apiserver:true system_pods:true] ...
	I0731 04:03:36.922749    6083 node_conditions.go:102] verifying NodePressure condition ...
	I0731 04:03:36.923927    6083 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0731 04:03:36.923938    6083 node_conditions.go:123] node cpu capacity is 2
	I0731 04:03:36.923942    6083 node_conditions.go:105] duration metric: took 1.191875ms to run NodePressure ...
	I0731 04:03:36.923946    6083 start.go:228] waiting for startup goroutines ...
	I0731 04:03:36.988211    6083 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 04:03:36.995200    6083 addons.go:502] enable addons completed in 554.838917ms: enabled=[storage-provisioner default-storageclass]
	I0731 04:03:36.995214    6083 start.go:233] waiting for cluster config update ...
	I0731 04:03:36.995220    6083 start.go:242] writing updated cluster config ...
	I0731 04:03:36.995570    6083 ssh_runner.go:195] Run: rm -f paused
	I0731 04:03:37.023877    6083 start.go:596] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0731 04:03:37.027247    6083 out.go:177] * Done! kubectl is now configured to use "image-484000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-07-31 11:03:19 UTC, ends at Mon 2023-07-31 11:03:39 UTC. --
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.451022797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.451058006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.451148881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.451159547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:31 image-484000 cri-dockerd[999]: time="2023-07-31T11:03:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1424a24786255bba4d0ed5578cd187582282d749bacf89191a3260957d9a5ef4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.468401381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.470029006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.470043297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.470052297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.492764714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.492876422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.492901922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:03:31 image-484000 dockerd[1107]: time="2023-07-31T11:03:31.492923089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:38 image-484000 dockerd[1101]: time="2023-07-31T11:03:38.658836592Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 31 11:03:38 image-484000 dockerd[1101]: time="2023-07-31T11:03:38.782992884Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 31 11:03:38 image-484000 dockerd[1101]: time="2023-07-31T11:03:38.798443384Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 31 11:03:38 image-484000 dockerd[1107]: time="2023-07-31T11:03:38.825061218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:03:38 image-484000 dockerd[1107]: time="2023-07-31T11:03:38.825147301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:38 image-484000 dockerd[1107]: time="2023-07-31T11:03:38.825159759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:03:38 image-484000 dockerd[1107]: time="2023-07-31T11:03:38.825166093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:03:39 image-484000 dockerd[1101]: time="2023-07-31T11:03:39.481094285Z" level=info msg="ignoring event" container=87fbc254ad64b6fe245651f8d5734770c263f82d9074db7e20b81581387a4396 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:03:39 image-484000 dockerd[1107]: time="2023-07-31T11:03:39.481068952Z" level=info msg="shim disconnected" id=87fbc254ad64b6fe245651f8d5734770c263f82d9074db7e20b81581387a4396 namespace=moby
	Jul 31 11:03:39 image-484000 dockerd[1107]: time="2023-07-31T11:03:39.481217118Z" level=warning msg="cleaning up after shim disconnected" id=87fbc254ad64b6fe245651f8d5734770c263f82d9074db7e20b81581387a4396 namespace=moby
	Jul 31 11:03:39 image-484000 dockerd[1107]: time="2023-07-31T11:03:39.481222368Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:03:39 image-484000 dockerd[1107]: time="2023-07-31T11:03:39.485058327Z" level=warning msg="cleanup warnings time=\"2023-07-31T11:03:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fb5fd4c1bfaa7       bcb9e554eaab6       8 seconds ago       Running             kube-scheduler            0                   1424a24786255
	fd25bff18cf6d       39dfb036b0986       8 seconds ago       Running             kube-apiserver            0                   ee7601f03a866
	087c03394d0a5       ab3683b584ae5       8 seconds ago       Running             kube-controller-manager   0                   c8deccea97a5f
	6b407d730c407       24bc64e911039       8 seconds ago       Running             etcd                      0                   50f4133aaea2c
	
	* 
	* ==> describe nodes <==
	* Name:               image-484000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-484000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=image-484000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T04_03_36_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:03:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-484000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:03:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:03:35 +0000   Mon, 31 Jul 2023 11:03:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:03:35 +0000   Mon, 31 Jul 2023 11:03:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:03:35 +0000   Mon, 31 Jul 2023 11:03:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 31 Jul 2023 11:03:35 +0000   Mon, 31 Jul 2023 11:03:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.15
	  Hostname:    image-484000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3a1e871b87e445bb00a219bb2ce15b8
	  System UUID:                e3a1e871b87e445bb00a219bb2ce15b8
	  Boot ID:                    cf6f5154-3138-4f7e-9b5e-2c11477d9cbe
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-484000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5s
	  kube-system                 kube-apiserver-image-484000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-image-484000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-image-484000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)   0 (0%)
	  memory             100Mi (2%)   0 (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 5s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  5s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5s    kubelet  Node image-484000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5s    kubelet  Node image-484000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5s    kubelet  Node image-484000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Jul31 11:03] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.633642] EINJ: EINJ table not found.
	[  +0.506725] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043692] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000810] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.116272] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.079946] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.441378] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.185900] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[  +0.078181] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[  +0.081114] systemd-fstab-generator[730]: Ignoring "noauto" for root device
	[  +1.149081] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.082587] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[  +0.077602] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +0.081059] systemd-fstab-generator[940]: Ignoring "noauto" for root device
	[  +0.075524] systemd-fstab-generator[951]: Ignoring "noauto" for root device
	[  +0.104009] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +2.454434] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +3.356638] systemd-fstab-generator[1425]: Ignoring "noauto" for root device
	[  +0.336949] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.798827] systemd-fstab-generator[2323]: Ignoring "noauto" for root device
	[  +3.297804] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [6b407d730c40] <==
	* {"level":"info","ts":"2023-07-31T11:03:31.809Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T11:03:31.809Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T11:03:31.809Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T11:03:31.811Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.15:2380"}
	{"level":"info","ts":"2023-07-31T11:03:31.811Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.15:2380"}
	{"level":"info","ts":"2023-07-31T11:03:31.813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 switched to configuration voters=(13362823965786376404)"}
	{"level":"info","ts":"2023-07-31T11:03:31.814Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5071baceae1bbe2f","local-member-id":"b9724998fcc608d4","added-peer-id":"b9724998fcc608d4","added-peer-peer-urls":["https://192.168.105.15:2380"]}
	{"level":"info","ts":"2023-07-31T11:03:32.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 received MsgPreVoteResp from b9724998fcc608d4 at term 1"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 became candidate at term 2"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 received MsgVoteResp from b9724998fcc608d4 at term 2"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b9724998fcc608d4 became leader at term 2"}
	{"level":"info","ts":"2023-07-31T11:03:32.087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b9724998fcc608d4 elected leader b9724998fcc608d4 at term 2"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b9724998fcc608d4","local-member-attributes":"{Name:image-484000 ClientURLs:[https://192.168.105.15:2379]}","request-path":"/0/members/b9724998fcc608d4/attributes","cluster-id":"5071baceae1bbe2f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.15:2379"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:03:32.095Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-31T11:03:32.095Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5071baceae1bbe2f","local-member-id":"b9724998fcc608d4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:03:32.095Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:03:32.095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:03:32.093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T11:03:32.095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  11:03:40 up 0 min,  0 users,  load average: 0.28, 0.06, 0.02
	Linux image-484000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fd25bff18cf6] <==
	* I0731 11:03:32.915132       1 shared_informer.go:318] Caches are synced for configmaps
	I0731 11:03:32.915277       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 11:03:32.915546       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0731 11:03:32.917194       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0731 11:03:32.917222       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0731 11:03:32.918976       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0731 11:03:32.919083       1 aggregator.go:152] initial CRD sync complete...
	I0731 11:03:32.919112       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 11:03:32.919128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 11:03:32.919145       1 cache.go:39] Caches are synced for autoregister controller
	I0731 11:03:32.939035       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 11:03:33.666802       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 11:03:33.818374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 11:03:33.823136       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:03:33.823156       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 11:03:33.969574       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:03:33.980053       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 11:03:34.060641       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 11:03:34.062962       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.15]
	I0731 11:03:34.063407       1 controller.go:624] quota admission added evaluator for: endpoints
	I0731 11:03:34.064835       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:03:34.873348       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0731 11:03:35.659241       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0731 11:03:35.663329       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 11:03:35.668049       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [087c03394d0a] <==
	* I0731 11:03:34.885625       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0731 11:03:34.889593       1 controllermanager.go:638] "Started controller" controller="pv-protection"
	I0731 11:03:34.889654       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0731 11:03:34.889664       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0731 11:03:34.892876       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0731 11:03:34.892959       1 ttl_controller.go:124] "Starting TTL controller"
	I0731 11:03:34.892965       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0731 11:03:34.896462       1 controllermanager.go:638] "Started controller" controller="persistentvolume-binder"
	I0731 11:03:34.896538       1 pv_controller_base.go:323] "Starting persistent volume controller"
	I0731 11:03:34.896563       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0731 11:03:34.899796       1 controllermanager.go:638] "Started controller" controller="pvc-protection"
	I0731 11:03:34.899850       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0731 11:03:34.899875       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0731 11:03:34.902978       1 controllermanager.go:638] "Started controller" controller="replicationcontroller"
	I0731 11:03:34.903341       1 replica_set.go:201] "Starting controller" name="replicationcontroller"
	I0731 11:03:34.906436       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0731 11:03:34.909540       1 controllermanager.go:638] "Started controller" controller="podgc"
	I0731 11:03:34.909644       1 gc_controller.go:103] Starting GC controller
	I0731 11:03:34.910062       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0731 11:03:34.966128       1 shared_informer.go:318] Caches are synced for tokens
	I0731 11:03:35.017564       1 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
	I0731 11:03:35.017595       1 horizontal.go:200] "Starting HPA controller"
	I0731 11:03:35.017599       1 shared_informer.go:311] Waiting for caches to sync for HPA
	E0731 11:03:35.168116       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0731 11:03:35.168133       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	
	* 
	* ==> kube-scheduler [fb5fd4c1bfaa] <==
	* W0731 11:03:32.876114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:03:32.876133       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 11:03:32.876228       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 11:03:32.876260       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 11:03:32.876285       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:03:32.876304       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 11:03:32.876344       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:03:32.876364       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 11:03:32.876394       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:03:32.876507       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 11:03:32.876436       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:03:32.876545       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 11:03:32.876450       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:03:32.876586       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 11:03:32.876460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:03:32.876619       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 11:03:32.876473       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 11:03:32.876650       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 11:03:32.876702       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:03:32.876723       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 11:03:33.726360       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:03:33.726390       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 11:03:33.839681       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:03:33.839705       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 11:03:36.468890       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-31 11:03:19 UTC, ends at Mon 2023-07-31 11:03:40 UTC. --
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.793443    2342 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.793459    2342 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.793471    2342 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.800106    2342 kubelet_node_status.go:70] "Attempting to register node" node="image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.803447    2342 kubelet_node_status.go:108] "Node was previously registered" node="image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.803482    2342 kubelet_node_status.go:73] "Successfully registered node" node="image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892835    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65ad873a6822b67fec5b0df35a6d9439-k8s-certs\") pod \"kube-apiserver-image-484000\" (UID: \"65ad873a6822b67fec5b0df35a6d9439\") " pod="kube-system/kube-apiserver-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892897    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa45cbf1184aadbfb9d8f7c2dbc05-flexvolume-dir\") pod \"kube-controller-manager-image-484000\" (UID: \"3eeaa45cbf1184aadbfb9d8f7c2dbc05\") " pod="kube-system/kube-controller-manager-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892913    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3eeaa45cbf1184aadbfb9d8f7c2dbc05-kubeconfig\") pod \"kube-controller-manager-image-484000\" (UID: \"3eeaa45cbf1184aadbfb9d8f7c2dbc05\") " pod="kube-system/kube-controller-manager-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892924    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fd247c48c4abbd083c4f4de0b35aa24-kubeconfig\") pod \"kube-scheduler-image-484000\" (UID: \"0fd247c48c4abbd083c4f4de0b35aa24\") " pod="kube-system/kube-scheduler-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892934    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/cab5d72288db48b6b8c3dedd6c735822-etcd-data\") pod \"etcd-image-484000\" (UID: \"cab5d72288db48b6b8c3dedd6c735822\") " pod="kube-system/etcd-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892969    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/cab5d72288db48b6b8c3dedd6c735822-etcd-certs\") pod \"etcd-image-484000\" (UID: \"cab5d72288db48b6b8c3dedd6c735822\") " pod="kube-system/etcd-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892982    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65ad873a6822b67fec5b0df35a6d9439-ca-certs\") pod \"kube-apiserver-image-484000\" (UID: \"65ad873a6822b67fec5b0df35a6d9439\") " pod="kube-system/kube-apiserver-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.892992    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65ad873a6822b67fec5b0df35a6d9439-usr-share-ca-certificates\") pod \"kube-apiserver-image-484000\" (UID: \"65ad873a6822b67fec5b0df35a6d9439\") " pod="kube-system/kube-apiserver-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.893001    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eeaa45cbf1184aadbfb9d8f7c2dbc05-ca-certs\") pod \"kube-controller-manager-image-484000\" (UID: \"3eeaa45cbf1184aadbfb9d8f7c2dbc05\") " pod="kube-system/kube-controller-manager-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.893010    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eeaa45cbf1184aadbfb9d8f7c2dbc05-k8s-certs\") pod \"kube-controller-manager-image-484000\" (UID: \"3eeaa45cbf1184aadbfb9d8f7c2dbc05\") " pod="kube-system/kube-controller-manager-image-484000"
	Jul 31 11:03:35 image-484000 kubelet[2342]: I0731 11:03:35.893053    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eeaa45cbf1184aadbfb9d8f7c2dbc05-usr-share-ca-certificates\") pod \"kube-controller-manager-image-484000\" (UID: \"3eeaa45cbf1184aadbfb9d8f7c2dbc05\") " pod="kube-system/kube-controller-manager-image-484000"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.685101    2342 apiserver.go:52] "Watching apiserver"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.691813    2342 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.699530    2342 reconciler.go:41] "Reconciler: start to sync state"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.751535    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-484000" podStartSLOduration=1.7515130079999999 podCreationTimestamp="2023-07-31 11:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:03:36.7514388 +0000 UTC m=+1.109576835" watchObservedRunningTime="2023-07-31 11:03:36.751513008 +0000 UTC m=+1.109651001"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.751579    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-484000" podStartSLOduration=1.751572133 podCreationTimestamp="2023-07-31 11:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:03:36.747828008 +0000 UTC m=+1.105966043" watchObservedRunningTime="2023-07-31 11:03:36.751572133 +0000 UTC m=+1.109710168"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.762471    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-484000" podStartSLOduration=1.762357342 podCreationTimestamp="2023-07-31 11:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:03:36.757904008 +0000 UTC m=+1.116042043" watchObservedRunningTime="2023-07-31 11:03:36.762357342 +0000 UTC m=+1.120495376"
	Jul 31 11:03:36 image-484000 kubelet[2342]: I0731 11:03:36.766826    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-484000" podStartSLOduration=1.76670355 podCreationTimestamp="2023-07-31 11:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:03:36.762613633 +0000 UTC m=+1.120751668" watchObservedRunningTime="2023-07-31 11:03:36.76670355 +0000 UTC m=+1.124841585"
	Jul 31 11:03:40 image-484000 kubelet[2342]: I0731 11:03:40.161307    2342 kubelet_node_status.go:493] "Fast updating node status as it just became ready"

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-484000 -n image-484000
helpers_test.go:261: (dbg) Run:  kubectl --context image-484000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-484000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-484000 describe pod storage-provisioner: exit status 1 (35.552084ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-484000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-464000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-464000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.945273292s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-464000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-464000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e89f5a75-3ceb-4ecd-b078-c2614dc98618] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e89f5a75-3ceb-4ecd-b078-c2614dc98618] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.015850958s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-464000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.16
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.16: exit status 1 (15.029927125s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.16" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons disable ingress-dns --alsologtostderr -v=1: (4.447235875s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons disable ingress --alsologtostderr -v=1: (7.101368417s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-464000 -n ingress-addon-legacy-464000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-652000 image ls                               | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| image   | functional-652000 image load                             | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-652000 image ls                               | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| image   | functional-652000 image save --daemon                    | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-652000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-652000                                        | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-652000                                        | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-652000 ssh pgrep                              | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-652000                                        | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-652000 image build -t                         | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | localhost/my-image:functional-652000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-652000                                        | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-652000 image ls                               | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| delete  | -p functional-652000                                     | functional-652000           | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| start   | -p image-484000 --driver=qemu2                           | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-484000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-484000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-484000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-484000                                          |                             |         |         |                     |                     |
	| delete  | -p image-484000                                          | image-484000                | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:03 PDT |
	| start   | -p ingress-addon-legacy-464000                           | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:03 PDT | 31 Jul 23 04:04 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-464000                              | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:04 PDT | 31 Jul 23 04:05 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-464000                              | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:05 PDT | 31 Jul 23 04:05 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-464000                              | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:05 PDT | 31 Jul 23 04:05 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-464000 ip                           | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:05 PDT | 31 Jul 23 04:05 PDT |
	| addons  | ingress-addon-legacy-464000                              | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:05 PDT | 31 Jul 23 04:05 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-464000                              | ingress-addon-legacy-464000 | jenkins | v1.31.1 | 31 Jul 23 04:05 PDT | 31 Jul 23 04:05 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 04:03:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 04:03:40.721502    6137 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:03:40.721622    6137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:03:40.721625    6137 out.go:309] Setting ErrFile to fd 2...
	I0731 04:03:40.721628    6137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:03:40.721739    6137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:03:40.722819    6137 out.go:303] Setting JSON to false
	I0731 04:03:40.738854    6137 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9191,"bootTime":1690792229,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:03:40.738907    6137 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:03:40.742980    6137 out.go:177] * [ingress-addon-legacy-464000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:03:40.750944    6137 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:03:40.751004    6137 notify.go:220] Checking for updates...
	I0731 04:03:40.757899    6137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:03:40.760932    6137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:03:40.768851    6137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:03:40.775917    6137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:03:40.782738    6137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:03:40.786012    6137 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:03:40.789897    6137 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:03:40.797960    6137 start.go:298] selected driver: qemu2
	I0731 04:03:40.797965    6137 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:03:40.797973    6137 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:03:40.800281    6137 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:03:40.803869    6137 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:03:40.808039    6137 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:03:40.808057    6137 cni.go:84] Creating CNI manager for ""
	I0731 04:03:40.808063    6137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:03:40.808067    6137 start_flags.go:319] config:
	{Name:ingress-addon-legacy-464000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:03:40.812723    6137 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:03:40.816886    6137 out.go:177] * Starting control plane node ingress-addon-legacy-464000 in cluster ingress-addon-legacy-464000
	I0731 04:03:40.822647    6137 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0731 04:03:41.008299    6137 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0731 04:03:41.008392    6137 cache.go:57] Caching tarball of preloaded images
	I0731 04:03:41.009260    6137 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0731 04:03:41.016324    6137 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0731 04:03:41.024938    6137 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0731 04:03:41.245338    6137 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0731 04:03:53.397174    6137 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0731 04:03:53.397315    6137 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0731 04:03:54.145066    6137 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0731 04:03:54.145251    6137 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/config.json ...
	I0731 04:03:54.145273    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/config.json: {Name:mk9a02fba661d1a761fb0e2a7fc56eb7cbd02534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:03:54.145495    6137 start.go:365] acquiring machines lock for ingress-addon-legacy-464000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:03:54.145566    6137 start.go:369] acquired machines lock for "ingress-addon-legacy-464000" in 64.375µs
	I0731 04:03:54.145577    6137 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:03:54.145614    6137 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:03:54.155237    6137 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0731 04:03:54.175941    6137 start.go:159] libmachine.API.Create for "ingress-addon-legacy-464000" (driver="qemu2")
	I0731 04:03:54.175964    6137 client.go:168] LocalClient.Create starting
	I0731 04:03:54.176055    6137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:03:54.176077    6137 main.go:141] libmachine: Decoding PEM data...
	I0731 04:03:54.176090    6137 main.go:141] libmachine: Parsing certificate...
	I0731 04:03:54.176133    6137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:03:54.176148    6137 main.go:141] libmachine: Decoding PEM data...
	I0731 04:03:54.176171    6137 main.go:141] libmachine: Parsing certificate...
	I0731 04:03:54.176471    6137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:03:54.307199    6137 main.go:141] libmachine: Creating SSH key...
	I0731 04:03:54.389294    6137 main.go:141] libmachine: Creating Disk image...
	I0731 04:03:54.389299    6137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:03:54.389428    6137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2
	I0731 04:03:54.398162    6137 main.go:141] libmachine: STDOUT: 
	I0731 04:03:54.398177    6137 main.go:141] libmachine: STDERR: 
	I0731 04:03:54.398239    6137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2 +20000M
	I0731 04:03:54.405321    6137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:03:54.405337    6137 main.go:141] libmachine: STDERR: 
	I0731 04:03:54.405357    6137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2
	I0731 04:03:54.405361    6137 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:03:54.405393    6137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:97:3b:72:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/disk.qcow2
	I0731 04:03:54.440075    6137 main.go:141] libmachine: STDOUT: 
	I0731 04:03:54.440094    6137 main.go:141] libmachine: STDERR: 
	I0731 04:03:54.440098    6137 main.go:141] libmachine: Attempt 0
	I0731 04:03:54.440111    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:03:54.440178    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:03:54.440196    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:03:54.440204    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:03:54.440209    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:03:54.440215    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:03:54.440232    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:03:54.440238    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:03:54.440244    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:03:54.440249    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:03:54.440256    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:03:54.440261    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:03:54.440267    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:03:54.440272    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:03:54.440277    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:03:54.440282    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:03:56.442385    6137 main.go:141] libmachine: Attempt 1
	I0731 04:03:56.442454    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:03:56.442911    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:03:56.442958    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:03:56.442988    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:03:56.443017    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:03:56.443076    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:03:56.443109    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:03:56.443140    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:03:56.443170    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:03:56.443202    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:03:56.443229    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:03:56.443258    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:03:56.443286    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:03:56.443317    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:03:56.443344    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:03:56.443372    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:03:58.445487    6137 main.go:141] libmachine: Attempt 2
	I0731 04:03:58.445511    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:03:58.445624    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:03:58.445638    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:03:58.445643    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:03:58.445648    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:03:58.445654    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:03:58.445659    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:03:58.445665    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:03:58.445689    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:03:58.445697    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:03:58.445701    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:03:58.445708    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:03:58.445715    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:03:58.445729    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:03:58.445738    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:03:58.445744    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:04:00.447728    6137 main.go:141] libmachine: Attempt 3
	I0731 04:04:00.447735    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:04:00.447821    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:04:00.447828    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:04:00.447833    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:04:00.447840    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:04:00.447845    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:04:00.447851    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:04:00.447855    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:04:00.447861    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:04:00.447866    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:04:00.447871    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:04:00.447881    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:04:00.447886    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:04:00.447892    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:04:00.447896    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:04:00.447908    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:04:02.449904    6137 main.go:141] libmachine: Attempt 4
	I0731 04:04:02.449973    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:04:02.450059    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:04:02.450072    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:04:02.450077    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:04:02.450083    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:04:02.450088    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:04:02.450098    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:04:02.450103    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:04:02.450116    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:04:02.450122    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:04:02.450127    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:04:02.450132    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:04:02.450137    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:04:02.450145    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:04:02.450151    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:04:02.450159    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:04:04.450471    6137 main.go:141] libmachine: Attempt 5
	I0731 04:04:04.450486    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:04:04.450551    6137 main.go:141] libmachine: Found 14 entries in /var/db/dhcpd_leases!
	I0731 04:04:04.450563    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:1a:6:2e:b:f5:24 ID:1,1a:6:2e:b:f5:24 Lease:0x64c8e678}
	I0731 04:04:04.450568    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:82:6b:f6:39:9a:25 ID:1,82:6b:f6:39:9a:25 Lease:0x64c8e4b8}
	I0731 04:04:04.450573    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:c6:ad:93:9d:98:1b ID:1,c6:ad:93:9d:98:1b Lease:0x64c7932c}
	I0731 04:04:04.450578    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:22:a:47:ac:f4:b8 ID:1,22:a:47:ac:f4:b8 Lease:0x64c8e463}
	I0731 04:04:04.450583    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:ea:38:b0:75:fd:a9 ID:1,ea:38:b0:75:fd:a9 Lease:0x64c8df2b}
	I0731 04:04:04.450589    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:e2:dc:dc:90:c6:ff ID:1,e2:dc:dc:90:c6:ff Lease:0x64c8d861}
	I0731 04:04:04.450593    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:26:fc:2b:69:33:fd ID:1,26:fc:2b:69:33:fd Lease:0x64c8d84e}
	I0731 04:04:04.450600    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:aa:97:6f:dc:d:49 ID:1,aa:97:6f:dc:d:49 Lease:0x64c8d64d}
	I0731 04:04:04.450605    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:a:b0:b9:7d:6b:90 ID:1,a:b0:b9:7d:6b:90 Lease:0x64c784bf}
	I0731 04:04:04.450611    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:de:5a:e2:77:c3:8 ID:1,de:5a:e2:77:c3:8 Lease:0x64c8d3c3}
	I0731 04:04:04.450615    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:2a:64:df:b1:8f ID:1,46:2a:64:df:b1:8f Lease:0x64c8d1b4}
	I0731 04:04:04.450620    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:46:c7:68:f6:85:a0 ID:1,46:c7:68:f6:85:a0 Lease:0x64c8c804}
	I0731 04:04:04.450625    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:c7:24:b1:e3:5 ID:1,b2:c7:24:b1:e3:5 Lease:0x64c77676}
	I0731 04:04:04.450630    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:82:66:e9:c3:45:2e ID:1,82:66:e9:c3:45:2e Lease:0x64c8c3fc}
	I0731 04:04:06.452667    6137 main.go:141] libmachine: Attempt 6
	I0731 04:04:06.452705    6137 main.go:141] libmachine: Searching for 42:75:97:3b:72:ea in /var/db/dhcpd_leases ...
	I0731 04:04:06.452902    6137 main.go:141] libmachine: Found 15 entries in /var/db/dhcpd_leases!
	I0731 04:04:06.452920    6137 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:42:75:97:3b:72:ea ID:1,42:75:97:3b:72:ea Lease:0x64c8e6a5}
	I0731 04:04:06.452925    6137 main.go:141] libmachine: Found match: 42:75:97:3b:72:ea
	I0731 04:04:06.452939    6137 main.go:141] libmachine: IP: 192.168.105.16
	I0731 04:04:06.452946    6137 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.16)...
	I0731 04:04:07.457479    6137 machine.go:88] provisioning docker machine ...
	I0731 04:04:07.457504    6137 buildroot.go:166] provisioning hostname "ingress-addon-legacy-464000"
	I0731 04:04:07.457566    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:07.457837    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:07.457844    6137 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-464000 && echo "ingress-addon-legacy-464000" | sudo tee /etc/hostname
	I0731 04:04:07.529153    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-464000
	
	I0731 04:04:07.529203    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:07.529439    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:07.529448    6137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-464000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-464000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-464000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 04:04:07.595254    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 04:04:07.595266    6137 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16968-4815/.minikube CaCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16968-4815/.minikube}
	I0731 04:04:07.595275    6137 buildroot.go:174] setting up certificates
	I0731 04:04:07.595286    6137 provision.go:83] configureAuth start
	I0731 04:04:07.595289    6137 provision.go:138] copyHostCerts
	I0731 04:04:07.595323    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem
	I0731 04:04:07.595369    6137 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem, removing ...
	I0731 04:04:07.595374    6137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem
	I0731 04:04:07.595478    6137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/key.pem (1675 bytes)
	I0731 04:04:07.595605    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem
	I0731 04:04:07.595631    6137 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem, removing ...
	I0731 04:04:07.595635    6137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem
	I0731 04:04:07.595682    6137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.pem (1078 bytes)
	I0731 04:04:07.595757    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem
	I0731 04:04:07.595782    6137 exec_runner.go:144] found /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem, removing ...
	I0731 04:04:07.595786    6137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem
	I0731 04:04:07.595847    6137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16968-4815/.minikube/cert.pem (1123 bytes)
	I0731 04:04:07.595922    6137 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-464000 san=[192.168.105.16 192.168.105.16 localhost 127.0.0.1 minikube ingress-addon-legacy-464000]
	I0731 04:04:07.717039    6137 provision.go:172] copyRemoteCerts
	I0731 04:04:07.717082    6137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 04:04:07.717090    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:07.751161    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 04:04:07.751213    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0731 04:04:07.757735    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 04:04:07.757773    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 04:04:07.764290    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 04:04:07.764338    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 04:04:07.771496    6137 provision.go:86] duration metric: configureAuth took 176.210625ms
	I0731 04:04:07.771502    6137 buildroot.go:189] setting minikube options for container-runtime
	I0731 04:04:07.771599    6137 config.go:182] Loaded profile config "ingress-addon-legacy-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0731 04:04:07.771636    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:07.771857    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:07.771862    6137 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 04:04:07.837861    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 04:04:07.837871    6137 buildroot.go:70] root file system type: tmpfs
	I0731 04:04:07.837937    6137 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 04:04:07.837981    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:07.838238    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:07.838280    6137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 04:04:07.905566    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 04:04:07.905613    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:07.905855    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:07.905864    6137 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 04:04:08.225962    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 04:04:08.225980    6137 machine.go:91] provisioned docker machine in 768.507292ms
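
The SSH command just above is an idempotent unit update: render docker.service.new, swap it in only when `diff -u` reports a difference (here the old unit simply does not exist yet, hence the "can't stat" message), then daemon-reload, enable, and restart. Below is a minimal local sketch of the same write-diff-swap idiom, with hypothetical paths and no SSH; it is not minikube's actual code.

    // unitswap.go — hypothetical write/diff/swap sketch of the step above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func updateUnit(path, content string) error {
        tmp := path + ".new"
        if err := os.WriteFile(tmp, []byte(content), 0644); err != nil {
            return err
        }
        // diff exits 0 when the files match; the new copy is then redundant.
        if err := exec.Command("diff", "-u", path, tmp).Run(); err == nil {
            return os.Remove(tmp)
        }
        // Differs, or the old unit is missing (as in the log): install and reload.
        if err := os.Rename(tmp, path); err != nil {
            return err
        }
        return exec.Command("systemctl", "daemon-reload").Run()
    }

    func main() {
        if err := updateUnit("/tmp/docker.service", "[Unit]\nDescription=demo\n"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
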
	I0731 04:04:08.225985    6137 client.go:171] LocalClient.Create took 14.050335042s
	I0731 04:04:08.225997    6137 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-464000" took 14.050381583s
	I0731 04:04:08.226009    6137 start.go:300] post-start starting for "ingress-addon-legacy-464000" (driver="qemu2")
	I0731 04:04:08.226017    6137 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 04:04:08.226084    6137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 04:04:08.226094    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:08.259402    6137 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 04:04:08.260737    6137 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 04:04:08.260743    6137 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/addons for local assets ...
	I0731 04:04:08.260808    6137 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16968-4815/.minikube/files for local assets ...
	I0731 04:04:08.260924    6137 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem -> 52232.pem in /etc/ssl/certs
	I0731 04:04:08.260928    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem -> /etc/ssl/certs/52232.pem
	I0731 04:04:08.261051    6137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 04:04:08.263888    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /etc/ssl/certs/52232.pem (1708 bytes)
	I0731 04:04:08.271297    6137 start.go:303] post-start completed in 45.281208ms
	I0731 04:04:08.271682    6137 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/config.json ...
	I0731 04:04:08.271847    6137 start.go:128] duration metric: createHost completed in 14.126550333s
	I0731 04:04:08.271874    6137 main.go:141] libmachine: Using SSH client type: native
	I0731 04:04:08.272097    6137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100665170] 0x100667bd0 <nil>  [] 0s} 192.168.105.16 22 <nil> <nil>}
	I0731 04:04:08.272102    6137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 04:04:08.337125    6137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1690801448.511189585
	
	I0731 04:04:08.337135    6137 fix.go:206] guest clock: 1690801448.511189585
	I0731 04:04:08.337140    6137 fix.go:219] Guest: 2023-07-31 04:04:08.511189585 -0700 PDT Remote: 2023-07-31 04:04:08.271849 -0700 PDT m=+27.569286085 (delta=239.340585ms)
	I0731 04:04:08.337152    6137 fix.go:190] guest clock delta is within tolerance: 239.340585ms
	I0731 04:04:08.337155    6137 start.go:83] releasing machines lock for "ingress-addon-legacy-464000", held for 14.191907166s
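
The guest-clock check above parses the guest's `date +%s.%N` output and accepts the drift when it is under a tolerance; the 239ms delta passes. A hedged sketch of that comparison follows; the 2s tolerance and function name are assumptions, not the values fix.go actually uses.

    // clockdelta.go — hypothetical sketch of the guest clock tolerance check.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
        // guestOut is the `date +%s.%N` output, e.g. "1690801448.511189585".
        secs, _ := strconv.ParseFloat(guestOut, 64)
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        host := time.Unix(1690801448, 271849000) // the Remote timestamp from the log
        delta, ok := withinTolerance("1690801448.511189585", host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
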
	I0731 04:04:08.337438    6137 ssh_runner.go:195] Run: cat /version.json
	I0731 04:04:08.337448    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:08.337470    6137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 04:04:08.337490    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:08.413197    6137 ssh_runner.go:195] Run: systemctl --version
	I0731 04:04:08.415389    6137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 04:04:08.417313    6137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 04:04:08.417349    6137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 04:04:08.420539    6137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 04:04:08.425593    6137 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 04:04:08.425607    6137 start.go:466] detecting cgroup driver to use...
	I0731 04:04:08.425672    6137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 04:04:08.431776    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0731 04:04:08.435401    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 04:04:08.438629    6137 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 04:04:08.438652    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 04:04:08.441467    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 04:04:08.444412    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 04:04:08.447775    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 04:04:08.451085    6137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 04:04:08.453932    6137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 04:04:08.456654    6137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 04:04:08.459375    6137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 04:04:08.462169    6137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:04:08.521223    6137 ssh_runner.go:195] Run: sudo systemctl restart containerd
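
The sed invocations above amount to anchored, indentation-preserving rewrites of containerd's config.toml. A minimal Go equivalent of the SystemdCgroup toggle is shown below; the sample TOML is illustrative, and in the real flow the edit happens on the guest, not locally.

    // cgroupfs.go — hypothetical regexp rendering of the SystemdCgroup sed above.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
    }
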
	I0731 04:04:08.529711    6137 start.go:466] detecting cgroup driver to use...
	I0731 04:04:08.529768    6137 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 04:04:08.535368    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 04:04:08.540135    6137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 04:04:08.547149    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 04:04:08.551656    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 04:04:08.556388    6137 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 04:04:08.590391    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 04:04:08.594952    6137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 04:04:08.600322    6137 ssh_runner.go:195] Run: which cri-dockerd
	I0731 04:04:08.601907    6137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 04:04:08.604456    6137 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 04:04:08.609498    6137 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 04:04:08.669519    6137 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 04:04:08.729158    6137 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 04:04:08.729174    6137 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0731 04:04:08.734275    6137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:04:08.793229    6137 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 04:04:09.954556    6137 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161337708s)
	I0731 04:04:09.954673    6137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 04:04:09.969192    6137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
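
The `scp memory --> /etc/docker/daemon.json (144 bytes)` step above writes a small daemon config so docker comes back up reporting cgroupfs as its cgroup driver. The log does not show the file contents; the sketch below is an assumption modeled on dockerd's documented daemon.json keys, not the exact bytes minikube ships.

    // daemonjson.go — hypothetical daemon.json matching the cgroupfs step.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            // exec-opts is docker's documented knob for the cgroup driver;
            // the other keys here are plausible but assumed.
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }
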
	I0731 04:04:09.983533    6137 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0731 04:04:09.983644    6137 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0731 04:04:09.985376    6137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 04:04:09.989793    6137 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0731 04:04:09.989860    6137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 04:04:09.996259    6137 docker.go:636] Got preloaded images: 
	I0731 04:04:09.996273    6137 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0731 04:04:09.996315    6137 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 04:04:09.999758    6137 ssh_runner.go:195] Run: which lz4
	I0731 04:04:10.000966    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0731 04:04:10.001071    6137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 04:04:10.002425    6137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 04:04:10.002439    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0731 04:04:11.688060    6137 docker.go:600] Took 1.687084 seconds to copy over tarball
	I0731 04:04:11.688124    6137 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 04:04:12.961827    6137 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.273718708s)
	I0731 04:04:12.961839    6137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 04:04:12.985448    6137 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 04:04:12.992516    6137 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0731 04:04:13.002227    6137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 04:04:13.069970    6137 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 04:04:14.520346    6137 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.450392084s)
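
The preload sequence above is check-then-copy: the stat probe fails with status 1, so the ~460 MB tarball is transferred and unpacked with `tar -I lz4`, and docker is restarted to pick up the image store. A hypothetical local sketch of the same logic follows; `cp` stands in for the scp in the log so the sketch runs standalone.

    // preload.go — hypothetical check-then-copy sketch of the preload step.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func ensurePreload(local, remote string) error {
        if _, err := os.Stat(remote); err == nil {
            return nil // already present: skip the expensive transfer
        }
        // Stand-in for the scp in the log.
        if err := exec.Command("cp", local, remote).Run(); err != nil {
            return err
        }
        // Mirrors: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
        return exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", remote).Run()
    }

    func main() {
        if err := ensurePreload("preload.tar.lz4", "/preloaded.tar.lz4"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
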
	I0731 04:04:14.520438    6137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 04:04:14.526658    6137 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0731 04:04:14.526667    6137 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0731 04:04:14.526671    6137 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 04:04:14.584589    6137 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 04:04:14.587376    6137 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 04:04:14.587524    6137 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:14.587579    6137 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0731 04:04:14.587626    6137 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 04:04:14.588465    6137 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 04:04:14.588659    6137 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0731 04:04:14.588712    6137 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 04:04:14.590963    6137 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 04:04:14.592766    6137 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:14.594894    6137 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 04:04:14.594931    6137 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0731 04:04:14.594935    6137 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 04:04:14.595018    6137 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 04:04:14.595041    6137 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 04:04:14.595064    6137 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W0731 04:04:15.806892    6137 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:15.806996    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 04:04:15.813486    6137 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0731 04:04:15.813517    6137 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 04:04:15.813577    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 04:04:15.822037    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0731 04:04:15.841324    6137 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:15.841425    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0731 04:04:15.849155    6137 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0731 04:04:15.849175    6137 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 04:04:15.849222    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0731 04:04:15.855689    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0731 04:04:15.859976    6137 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:15.860059    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:15.866417    6137 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 04:04:15.866440    6137 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:15.866502    6137 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:15.876709    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0731 04:04:16.116417    6137 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:16.116536    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0731 04:04:16.122509    6137 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0731 04:04:16.122534    6137 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0731 04:04:16.122584    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0731 04:04:16.128333    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0731 04:04:16.144748    6137 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:16.144846    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0731 04:04:16.150302    6137 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0731 04:04:16.150322    6137 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 04:04:16.150369    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0731 04:04:16.157779    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0731 04:04:16.289565    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 04:04:16.296094    6137 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0731 04:04:16.296121    6137 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0731 04:04:16.296168    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0731 04:04:16.309975    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0731 04:04:16.478141    6137 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:16.478556    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0731 04:04:16.495810    6137 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0731 04:04:16.495850    6137 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 04:04:16.495939    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0731 04:04:16.507061    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0731 04:04:16.676433    6137 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0731 04:04:16.676947    6137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0731 04:04:16.700407    6137 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0731 04:04:16.700459    6137 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0731 04:04:16.700600    6137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0731 04:04:16.715993    6137 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0731 04:04:16.716059    6137 cache_images.go:92] LoadImages completed in 2.189430833s
	W0731 04:04:16.716127    6137 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
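
Every image check above follows the same pattern: compare the digest the cache expects against `docker image inspect --format {{.Id}}`, evict on mismatch (here each preloaded image is amd64 on an arm64 guest), then fall back to the per-arch cache, which is absent, producing the warning. Below is a hedged sketch of that needs-transfer test; the function name is hypothetical and the digest comparison is simplified relative to cache_images.go.

    // needstransfer.go — hypothetical sketch of the image hash check above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func needsTransfer(image, wantHash string) (bool, error) {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true, nil // image not present at all: definitely transfer
        }
        got := strings.TrimSpace(string(out))
        if got == wantHash {
            return false, nil
        }
        // Wrong digest (e.g. wrong architecture): remove before reloading.
        return true, exec.Command("docker", "rmi", image).Run()
    }

    func main() {
        // Hash taken from the pause:3.2 log line above, for illustration only.
        ok, err := needsTransfer("registry.k8s.io/pause:3.2",
            "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c")
        fmt.Println(ok, err)
    }
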
	I0731 04:04:16.716230    6137 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 04:04:16.732268    6137 cni.go:84] Creating CNI manager for ""
	I0731 04:04:16.732289    6137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:04:16.732312    6137 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 04:04:16.732327    6137 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.16 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-464000 NodeName:ingress-addon-legacy-464000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 04:04:16.732496    6137 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-464000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 04:04:16.732570    6137 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-464000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 04:04:16.732683    6137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0731 04:04:16.737961    6137 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 04:04:16.738013    6137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 04:04:16.742140    6137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0731 04:04:16.749464    6137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0731 04:04:16.756224    6137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2130 bytes)
	I0731 04:04:16.763379    6137 ssh_runner.go:195] Run: grep 192.168.105.16	control-plane.minikube.internal$ /etc/hosts
	I0731 04:04:16.764963    6137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
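
The /etc/hosts one-liner above is idempotent: filter out any stale line ending in the tab-prefixed hostname, then append the current mapping. A small Go rendering of the same filter-and-append follows; the function name is hypothetical.

    // hostsupdate.go — hypothetical sketch of the /etc/hosts one-liner above.
    package main

    import (
        "fmt"
        "strings"
    )

    func updateHosts(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            // Mirrors: grep -v $'\tcontrol-plane.minikube.internal$'
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        fmt.Print(updateHosts("127.0.0.1\tlocalhost", "192.168.105.16", "control-plane.minikube.internal"))
    }
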
	I0731 04:04:16.768966    6137 certs.go:56] Setting up /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000 for IP: 192.168.105.16
	I0731 04:04:16.768978    6137 certs.go:190] acquiring lock for shared ca certs: {Name:mk645bb5ce6691935288c693436a38a3c4bde2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:16.769320    6137 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key
	I0731 04:04:16.769506    6137 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key
	I0731 04:04:16.769534    6137 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key
	I0731 04:04:16.769541    6137 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt with IP's: []
	I0731 04:04:16.817343    6137 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt ...
	I0731 04:04:16.817347    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: {Name:mkda121ed2256df32cfb38d99efd71c1e88f0296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:16.817528    6137 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key ...
	I0731 04:04:16.817532    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key: {Name:mkd85eeb0bf6a5d56a5aee5659bd9538e4a5c1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:16.817644    6137 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key.222c529e
	I0731 04:04:16.817652    6137 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt.222c529e with IP's: [192.168.105.16 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 04:04:16.890193    6137 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt.222c529e ...
	I0731 04:04:16.890197    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt.222c529e: {Name:mk1aac5cb25bf28a93b4fa9fa26e43f3ae9c6f4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:16.890334    6137 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key.222c529e ...
	I0731 04:04:16.890337    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key.222c529e: {Name:mk4e2cbfa1ec90c9a39eede8918784d5c9cc3a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:16.890438    6137 certs.go:337] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt.222c529e -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt
	I0731 04:04:16.890524    6137 certs.go:341] copying /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key.222c529e -> /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key
	I0731 04:04:16.890609    6137 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.key
	I0731 04:04:16.890615    6137 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.crt with IP's: []
	I0731 04:04:17.024845    6137 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.crt ...
	I0731 04:04:17.024851    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.crt: {Name:mk1a5937149e03c931aaa1a6c855b2118f605b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:17.025031    6137 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.key ...
	I0731 04:04:17.025034    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.key: {Name:mk82317970a83439de9978698dafcea544f097f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:17.025161    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 04:04:17.025183    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 04:04:17.025196    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 04:04:17.025213    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 04:04:17.025230    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 04:04:17.025243    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 04:04:17.025255    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 04:04:17.025267    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 04:04:17.025365    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem (1338 bytes)
	W0731 04:04:17.025548    6137 certs.go:433] ignoring /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223_empty.pem, impossibly tiny 0 bytes
	I0731 04:04:17.025561    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 04:04:17.025583    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem (1078 bytes)
	I0731 04:04:17.025607    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem (1123 bytes)
	I0731 04:04:17.025636    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/certs/key.pem (1675 bytes)
	I0731 04:04:17.025684    6137 certs.go:437] found cert: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem (1708 bytes)
	I0731 04:04:17.025707    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:04:17.025733    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem -> /usr/share/ca-certificates/5223.pem
	I0731 04:04:17.025746    6137 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem -> /usr/share/ca-certificates/52232.pem
	I0731 04:04:17.026125    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 04:04:17.033782    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 04:04:17.040511    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 04:04:17.047679    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 04:04:17.055296    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 04:04:17.062626    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 04:04:17.069544    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 04:04:17.076235    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 04:04:17.083501    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 04:04:17.090583    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/5223.pem --> /usr/share/ca-certificates/5223.pem (1338 bytes)
	I0731 04:04:17.097266    6137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/ssl/certs/52232.pem --> /usr/share/ca-certificates/52232.pem (1708 bytes)
	I0731 04:04:17.104029    6137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 04:04:17.109501    6137 ssh_runner.go:195] Run: openssl version
	I0731 04:04:17.111417    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 04:04:17.114987    6137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:04:17.116678    6137 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:04:17.116699    6137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 04:04:17.118455    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 04:04:17.121271    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5223.pem && ln -fs /usr/share/ca-certificates/5223.pem /etc/ssl/certs/5223.pem"
	I0731 04:04:17.124462    6137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5223.pem
	I0731 04:04:17.126241    6137 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 10:55 /usr/share/ca-certificates/5223.pem
	I0731 04:04:17.126263    6137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5223.pem
	I0731 04:04:17.128009    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5223.pem /etc/ssl/certs/51391683.0"
	I0731 04:04:17.131432    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/52232.pem && ln -fs /usr/share/ca-certificates/52232.pem /etc/ssl/certs/52232.pem"
	I0731 04:04:17.134836    6137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/52232.pem
	I0731 04:04:17.136405    6137 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 10:55 /usr/share/ca-certificates/52232.pem
	I0731 04:04:17.136425    6137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/52232.pem
	I0731 04:04:17.138434    6137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/52232.pem /etc/ssl/certs/3ec20f2e.0"
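
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: each CA is linked into /etc/ssl/certs under the `openssl x509 -hash` value of its PEM, so OpenSSL's lookup-by-hash can find it. A hypothetical sketch of that install idiom:

    // installca.go — hypothetical sketch of the hash-symlink idiom above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
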
	I0731 04:04:17.141451    6137 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 04:04:17.142793    6137 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 04:04:17.142823    6137 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.16 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:04:17.142893    6137 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 04:04:17.148417    6137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 04:04:17.151846    6137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 04:04:17.154952    6137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 04:04:17.157568    6137 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 04:04:17.157581    6137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0731 04:04:17.183557    6137 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0731 04:04:17.183584    6137 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 04:04:17.265702    6137 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 04:04:17.265755    6137 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 04:04:17.265824    6137 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 04:04:17.311423    6137 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 04:04:17.312234    6137 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 04:04:17.312317    6137 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 04:04:17.381515    6137 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 04:04:17.392164    6137 out.go:204]   - Generating certificates and keys ...
	I0731 04:04:17.392205    6137 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 04:04:17.392241    6137 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 04:04:17.432398    6137 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 04:04:17.462073    6137 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 04:04:17.530414    6137 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 04:04:17.739944    6137 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 04:04:17.797402    6137 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 04:04:17.797503    6137 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-464000 localhost] and IPs [192.168.105.16 127.0.0.1 ::1]
	I0731 04:04:17.838743    6137 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 04:04:17.838807    6137 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-464000 localhost] and IPs [192.168.105.16 127.0.0.1 ::1]
	I0731 04:04:18.057257    6137 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 04:04:18.169907    6137 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 04:04:18.232453    6137 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 04:04:18.232501    6137 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 04:04:18.421649    6137 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 04:04:18.652136    6137 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 04:04:18.705715    6137 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 04:04:18.762162    6137 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 04:04:18.762368    6137 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 04:04:18.766820    6137 out.go:204]   - Booting up control plane ...
	I0731 04:04:18.766863    6137 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 04:04:18.773047    6137 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 04:04:18.773584    6137 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 04:04:18.774003    6137 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 04:04:18.775352    6137 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 04:04:30.279831    6137 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504168 seconds
	I0731 04:04:30.280083    6137 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 04:04:30.304857    6137 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 04:04:30.820994    6137 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 04:04:30.821280    6137 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-464000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0731 04:04:31.325831    6137 kubeadm.go:322] [bootstrap-token] Using token: x1tp1y.doemg1xypwardp7q
	I0731 04:04:31.329802    6137 out.go:204]   - Configuring RBAC rules ...
	I0731 04:04:31.329905    6137 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 04:04:31.330020    6137 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 04:04:31.335963    6137 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 04:04:31.337066    6137 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 04:04:31.338192    6137 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 04:04:31.339479    6137 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 04:04:31.345810    6137 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 04:04:31.528870    6137 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 04:04:31.741745    6137 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 04:04:31.742974    6137 kubeadm.go:322] 
	I0731 04:04:31.743064    6137 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 04:04:31.743071    6137 kubeadm.go:322] 
	I0731 04:04:31.743119    6137 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 04:04:31.743124    6137 kubeadm.go:322] 
	I0731 04:04:31.743139    6137 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 04:04:31.743176    6137 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 04:04:31.743209    6137 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 04:04:31.743214    6137 kubeadm.go:322] 
	I0731 04:04:31.743263    6137 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 04:04:31.743346    6137 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 04:04:31.743418    6137 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 04:04:31.743423    6137 kubeadm.go:322] 
	I0731 04:04:31.743487    6137 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 04:04:31.743563    6137 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 04:04:31.743568    6137 kubeadm.go:322] 
	I0731 04:04:31.743642    6137 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x1tp1y.doemg1xypwardp7q \
	I0731 04:04:31.743738    6137 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa \
	I0731 04:04:31.743772    6137 kubeadm.go:322]     --control-plane 
	I0731 04:04:31.743777    6137 kubeadm.go:322] 
	I0731 04:04:31.743851    6137 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 04:04:31.743857    6137 kubeadm.go:322] 
	I0731 04:04:31.743985    6137 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x1tp1y.doemg1xypwardp7q \
	I0731 04:04:31.744070    6137 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:92b0556b3b6f92a8481720bc11b5f636fd40e93120aabd046ff70f77047ec2aa 
	I0731 04:04:31.744223    6137 kubeadm.go:322] W0731 11:04:17.357216    1413 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0731 04:04:31.744415    6137 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0731 04:04:31.744540    6137 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0731 04:04:31.744646    6137 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 04:04:31.744757    6137 kubeadm.go:322] W0731 11:04:18.947373    1413 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 04:04:31.744900    6137 kubeadm.go:322] W0731 11:04:18.948041    1413 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 04:04:31.744912    6137 cni.go:84] Creating CNI manager for ""
	I0731 04:04:31.744925    6137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:04:31.744939    6137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 04:04:31.745044    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:31.745046    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=ingress-addon-legacy-464000 minikube.k8s.io/updated_at=2023_07_31T04_04_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:31.751313    6137 ops.go:34] apiserver oom_adj: -16
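	(The oom_adj probe above is a plain procfs read; -16 means the kernel should strongly avoid OOM-killing the apiserver. The same check, run manually inside the guest:)
	    # Read the kube-apiserver's OOM-killer adjustment, as minikube just did:
	    cat /proc/$(pgrep kube-apiserver)/oom_adj   # expected: -16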
	I0731 04:04:31.819087    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:31.855306    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:32.391064    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:32.891064    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:33.391070    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:33.890987    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:34.391057    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:34.891022    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:35.390990    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:35.890961    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:36.391017    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:36.890763    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:37.390950    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:37.890940    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:38.390667    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:38.890761    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:39.391005    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:39.890663    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:40.390930    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:40.890899    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:41.390857    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:41.890430    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:42.390839    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:42.890675    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:43.390822    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:43.890857    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:44.390669    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:44.890599    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:45.390777    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:45.890647    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:46.390799    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:46.890567    6137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 04:04:46.971253    6137 kubeadm.go:1081] duration metric: took 15.226651667s to wait for elevateKubeSystemPrivileges.
	I0731 04:04:46.971274    6137 kubeadm.go:406] StartCluster complete in 29.829125209s
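	(The block of repeated `kubectl get sa default` runs above is minikube polling until the default ServiceAccount exists, which is what the 15.2s elevateKubeSystemPrivileges metric measures. A minimal shell equivalent, assuming kubectl is on PATH:)
	    # Poll until the default ServiceAccount appears (what the retries above do):
	    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done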
	I0731 04:04:46.971283    6137 settings.go:142] acquiring lock: {Name:mk7e2067b9c26be8d46dc95ba3a8a7ad946cadb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:46.971365    6137 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:04:46.971944    6137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/kubeconfig: {Name:mk98971837606256b8bab3d325e05dbfd512b496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:04:46.972126    6137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 04:04:46.972166    6137 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 04:04:46.972209    6137 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-464000"
	I0731 04:04:46.972218    6137 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-464000"
	I0731 04:04:46.972249    6137 host.go:66] Checking if "ingress-addon-legacy-464000" exists ...
	I0731 04:04:46.972251    6137 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-464000"
	I0731 04:04:46.972268    6137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-464000"
	I0731 04:04:46.972352    6137 config.go:182] Loaded profile config "ingress-addon-legacy-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0731 04:04:46.972372    6137 kapi.go:59] client config for ingress-addon-legacy-464000: &rest.Config{Host:"https://192.168.105.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key", CAFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016c1bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 04:04:46.972976    6137 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 04:04:46.973329    6137 kapi.go:59] client config for ingress-addon-legacy-464000: &rest.Config{Host:"https://192.168.105.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key", CAFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016c1bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 04:04:46.977436    6137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:04:46.980443    6137 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:04:46.980448    6137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 04:04:46.980455    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:46.984208    6137 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-464000"
	I0731 04:04:46.984227    6137 host.go:66] Checking if "ingress-addon-legacy-464000" exists ...
	I0731 04:04:46.984909    6137 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 04:04:46.984916    6137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 04:04:46.984921    6137 sshutil.go:53] new ssh client: &{IP:192.168.105.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/ingress-addon-legacy-464000/id_rsa Username:docker}
	I0731 04:04:46.988802    6137 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-464000" context rescaled to 1 replicas
	I0731 04:04:46.988817    6137 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.16 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:04:46.992361    6137 out.go:177] * Verifying Kubernetes components...
	I0731 04:04:46.999437    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:04:47.024084    6137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 04:04:47.024344    6137 kapi.go:59] client config for ingress-addon-legacy-464000: &rest.Config{Host:"https://192.168.105.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.key", CAFile:"/Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016c1bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 04:04:47.024486    6137 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-464000" to be "Ready" ...
	I0731 04:04:47.025745    6137 node_ready.go:49] node "ingress-addon-legacy-464000" has status "Ready":"True"
	I0731 04:04:47.025751    6137 node_ready.go:38] duration metric: took 1.258167ms waiting for node "ingress-addon-legacy-464000" to be "Ready" ...
	I0731 04:04:47.025757    6137 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 04:04:47.028947    6137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 04:04:47.029098    6137 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:47.078241    6137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 04:04:47.320471    6137 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
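	(The sed pipeline a few lines up rewrites the coredns ConfigMap in place; the injected fragment can be confirmed afterwards. A hedged sketch, using plain kubectl rather than the in-VM binary path the log shows:)
	    # Show the patched Corefile; the sed above should have inserted a log
	    # directive plus a hosts stanza ahead of the forward-to-resolv.conf line:
	    kubectl -n kube-system get configmap coredns -o yaml
	    #   hosts {
	    #      192.168.105.1 host.minikube.internal
	    #      fallthrough
	    #   }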
	I0731 04:04:47.367453    6137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 04:04:47.373303    6137 addons.go:502] enable addons completed in 401.145417ms: enabled=[storage-provisioner default-storageclass]
	I0731 04:04:48.551999    6137 pod_ready.go:92] pod "etcd-ingress-addon-legacy-464000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:04:48.552036    6137 pod_ready.go:81] duration metric: took 1.522961417s waiting for pod "etcd-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.552062    6137 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.563476    6137 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-464000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:04:48.563496    6137 pod_ready.go:81] duration metric: took 11.421542ms waiting for pod "kube-apiserver-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.563508    6137 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.569689    6137 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-464000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:04:48.569702    6137 pod_ready.go:81] duration metric: took 6.185167ms waiting for pod "kube-controller-manager-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.569713    6137 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjm75" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.624853    6137 request.go:628] Waited for 53.062292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/nodes/ingress-addon-legacy-464000
	I0731 04:04:48.627482    6137 pod_ready.go:92] pod "kube-proxy-gjm75" in "kube-system" namespace has status "Ready":"True"
	I0731 04:04:48.627494    6137 pod_ready.go:81] duration metric: took 57.775417ms waiting for pod "kube-proxy-gjm75" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.627502    6137 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:48.826585    6137 request.go:628] Waited for 199.005292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-464000
	I0731 04:04:49.026670    6137 request.go:628] Waited for 194.492167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/nodes/ingress-addon-legacy-464000
	I0731 04:04:49.034377    6137 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-464000" in "kube-system" namespace has status "Ready":"True"
	I0731 04:04:49.034404    6137 pod_ready.go:81] duration metric: took 406.899167ms waiting for pod "kube-scheduler-ingress-addon-legacy-464000" in "kube-system" namespace to be "Ready" ...
	I0731 04:04:49.034422    6137 pod_ready.go:38] duration metric: took 2.008701375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 04:04:49.034475    6137 api_server.go:52] waiting for apiserver process to appear ...
	I0731 04:04:49.034779    6137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 04:04:49.051030    6137 api_server.go:72] duration metric: took 2.06223775s to wait for apiserver process to appear ...
	I0731 04:04:49.051055    6137 api_server.go:88] waiting for apiserver healthz status ...
	I0731 04:04:49.051079    6137 api_server.go:253] Checking apiserver healthz at https://192.168.105.16:8443/healthz ...
	I0731 04:04:49.059908    6137 api_server.go:279] https://192.168.105.16:8443/healthz returned 200:
	ok
	I0731 04:04:49.061027    6137 api_server.go:141] control plane version: v1.18.20
	I0731 04:04:49.061044    6137 api_server.go:131] duration metric: took 9.982125ms to wait for apiserver health ...
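	(The healthz probe above is an ordinary HTTPS GET; /healthz is typically readable without client credentials via the system:public-info-viewer binding. A manual equivalent, assuming the CA path from the client config above:)
	    # Reproduce the health check minikube performed:
	    curl --cacert /Users/jenkins/minikube-integration/16968-4815/.minikube/ca.crt \
	      https://192.168.105.16:8443/healthz   # prints: ok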
	I0731 04:04:49.061051    6137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 04:04:49.226588    6137 request.go:628] Waited for 165.432208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/namespaces/kube-system/pods
	I0731 04:04:49.239401    6137 system_pods.go:59] 7 kube-system pods found
	I0731 04:04:49.239442    6137 system_pods.go:61] "coredns-66bff467f8-wnp9c" [c13f2045-0d5d-48a9-aeb1-e5acc6680ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 04:04:49.239461    6137 system_pods.go:61] "etcd-ingress-addon-legacy-464000" [e1c37d86-f183-443e-a3b6-e89238eed6a3] Running
	I0731 04:04:49.239477    6137 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-464000" [2f49a208-e22c-4d83-8a7f-e9deb86fef9a] Running
	I0731 04:04:49.239491    6137 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-464000" [ca6e17a2-71e3-46b5-90c6-399578e6171d] Running
	I0731 04:04:49.239509    6137 system_pods.go:61] "kube-proxy-gjm75" [e9fd0e0e-630e-45ae-909f-4f830b0452a5] Running
	I0731 04:04:49.239522    6137 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-464000" [90e37ca9-e179-44d5-a443-c68af0278219] Running
	I0731 04:04:49.239535    6137 system_pods.go:61] "storage-provisioner" [d73eaf6d-558d-4f3c-a9e7-ed2ffe42d3e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 04:04:49.239568    6137 system_pods.go:74] duration metric: took 178.507416ms to wait for pod list to return data ...
	I0731 04:04:49.239585    6137 default_sa.go:34] waiting for default service account to be created ...
	I0731 04:04:49.426572    6137 request.go:628] Waited for 186.849125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/namespaces/default/serviceaccounts
	I0731 04:04:49.433546    6137 default_sa.go:45] found service account: "default"
	I0731 04:04:49.433584    6137 default_sa.go:55] duration metric: took 193.990167ms for default service account to be created ...
	I0731 04:04:49.433602    6137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 04:04:49.626116    6137 request.go:628] Waited for 192.444958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/namespaces/kube-system/pods
	I0731 04:04:49.634009    6137 system_pods.go:86] 7 kube-system pods found
	I0731 04:04:49.634030    6137 system_pods.go:89] "coredns-66bff467f8-wnp9c" [c13f2045-0d5d-48a9-aeb1-e5acc6680ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 04:04:49.634041    6137 system_pods.go:89] "etcd-ingress-addon-legacy-464000" [e1c37d86-f183-443e-a3b6-e89238eed6a3] Running
	I0731 04:04:49.634049    6137 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-464000" [2f49a208-e22c-4d83-8a7f-e9deb86fef9a] Running
	I0731 04:04:49.634057    6137 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-464000" [ca6e17a2-71e3-46b5-90c6-399578e6171d] Running
	I0731 04:04:49.634064    6137 system_pods.go:89] "kube-proxy-gjm75" [e9fd0e0e-630e-45ae-909f-4f830b0452a5] Running
	I0731 04:04:49.634072    6137 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-464000" [90e37ca9-e179-44d5-a443-c68af0278219] Running
	I0731 04:04:49.634080    6137 system_pods.go:89] "storage-provisioner" [d73eaf6d-558d-4f3c-a9e7-ed2ffe42d3e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 04:04:49.634090    6137 system_pods.go:126] duration metric: took 200.484708ms to wait for k8s-apps to be running ...
	I0731 04:04:49.634099    6137 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 04:04:49.634224    6137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 04:04:49.646834    6137 system_svc.go:56] duration metric: took 12.726875ms WaitForService to wait for kubelet.
	I0731 04:04:49.646849    6137 kubeadm.go:581] duration metric: took 2.65808025s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 04:04:49.646869    6137 node_conditions.go:102] verifying NodePressure condition ...
	I0731 04:04:49.826534    6137 request.go:628] Waited for 179.585125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.16:8443/api/v1/nodes
	I0731 04:04:49.836523    6137 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0731 04:04:49.836579    6137 node_conditions.go:123] node cpu capacity is 2
	I0731 04:04:49.836600    6137 node_conditions.go:105] duration metric: took 189.726125ms to run NodePressure ...
	I0731 04:04:49.836617    6137 start.go:228] waiting for startup goroutines ...
	I0731 04:04:49.836631    6137 start.go:233] waiting for cluster config update ...
	I0731 04:04:49.836650    6137 start.go:242] writing updated cluster config ...
	I0731 04:04:49.837861    6137 ssh_runner.go:195] Run: rm -f paused
	I0731 04:04:49.966273    6137 start.go:596] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0731 04:04:49.970700    6137 out.go:177] 
	W0731 04:04:49.974770    6137 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0731 04:04:49.977668    6137 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0731 04:04:49.985635    6137 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-464000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-07-31 11:04:05 UTC, ends at Mon 2023-07-31 11:05:55 UTC. --
	Jul 31 11:05:31 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:31.826029231Z" level=info msg="shim disconnected" id=1379fe344484f2e9f9cd1cfe5585c97f8f0cdbf67b80839b49f5186f0e06f551 namespace=moby
	Jul 31 11:05:31 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:31.826057647Z" level=warning msg="cleaning up after shim disconnected" id=1379fe344484f2e9f9cd1cfe5585c97f8f0cdbf67b80839b49f5186f0e06f551 namespace=moby
	Jul 31 11:05:31 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:31.826061938Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:05:44 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:44.163218310Z" level=info msg="ignoring event" container=b2ecd3f6ff49bdfa275f8d7af437396b7bfc8e65b6104227b35ccf32c9efe5ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:05:44 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:44.163519512Z" level=info msg="shim disconnected" id=b2ecd3f6ff49bdfa275f8d7af437396b7bfc8e65b6104227b35ccf32c9efe5ba namespace=moby
	Jul 31 11:05:44 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:44.163561261Z" level=warning msg="cleaning up after shim disconnected" id=b2ecd3f6ff49bdfa275f8d7af437396b7bfc8e65b6104227b35ccf32c9efe5ba namespace=moby
	Jul 31 11:05:44 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:44.163567761Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.211680288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.211747745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.211763911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.211774411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:46.248696806Z" level=info msg="ignoring event" container=7163e78a77afecc782520576109756a9df3b0d965413cd1356adb83d4abb0f59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.248976966Z" level=info msg="shim disconnected" id=7163e78a77afecc782520576109756a9df3b0d965413cd1356adb83d4abb0f59 namespace=moby
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.249005424Z" level=warning msg="cleaning up after shim disconnected" id=7163e78a77afecc782520576109756a9df3b0d965413cd1356adb83d4abb0f59 namespace=moby
	Jul 31 11:05:46 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:46.249009841Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:50.660013515Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b3c44abbd72bb255a027c48ffbafd51aceb8ae1f73df7c9081b6a72d86211dbe
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:50.669923278Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b3c44abbd72bb255a027c48ffbafd51aceb8ae1f73df7c9081b6a72d86211dbe
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:50.742318224Z" level=info msg="ignoring event" container=b3c44abbd72bb255a027c48ffbafd51aceb8ae1f73df7c9081b6a72d86211dbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.742565969Z" level=info msg="shim disconnected" id=b3c44abbd72bb255a027c48ffbafd51aceb8ae1f73df7c9081b6a72d86211dbe namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.742639301Z" level=warning msg="cleaning up after shim disconnected" id=b3c44abbd72bb255a027c48ffbafd51aceb8ae1f73df7c9081b6a72d86211dbe namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.742650550Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1068]: time="2023-07-31T11:05:50.787513915Z" level=info msg="ignoring event" container=d7d92adba007442963a5456f3a5ae5361b2c8327b9dcbb7f44765dc35d82fcad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.787723119Z" level=info msg="shim disconnected" id=d7d92adba007442963a5456f3a5ae5361b2c8327b9dcbb7f44765dc35d82fcad namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.788050029Z" level=warning msg="cleaning up after shim disconnected" id=d7d92adba007442963a5456f3a5ae5361b2c8327b9dcbb7f44765dc35d82fcad namespace=moby
	Jul 31 11:05:50 ingress-addon-legacy-464000 dockerd[1074]: time="2023-07-31T11:05:50.788061404Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	7163e78a77afe       13753a81eccfd                                                                                                      9 seconds ago        Exited              hello-world-app           2                   afa45ff4084c0
	57ae372bd2aa3       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                      35 seconds ago       Running             nginx                     0                   022bc7f3b3ee3
	b3c44abbd72bb       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   48 seconds ago       Exited              controller                0                   d7d92adba0074
	bb7deee0917ed       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   42e989f870897
	a214fce3aaff6       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   969a2416df525
	e153b178e6703       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   596719cb93234
	3bc9c3215e83d       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   af9d3313be5bc
	69811b1c49b1c       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   0da4e4146ce92
	1da437fa47774       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   09fe98ddb04b5
	61eb610e91168       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   5faed790deb5a
	cc1502d13d216       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   605b5cd4ea205
	addccea5c6018       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   2587355dce800
	
	* 
	* ==> coredns [3bc9c3215e83] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cb78e7fd356afb50fc9964e5378f29cc
	[INFO] Reloading complete
	I0731 11:05:17.693217       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-31 11:04:47.692567295 +0000 UTC m=+0.012989121) (total time: 30.000389404s):
	Trace[2019727887]: [30.000389404s] [30.000389404s] END
	E0731 11:05:17.693230       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0731 11:05:17.693863       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-31 11:04:47.69293038 +0000 UTC m=+0.013352206) (total time: 30.000923499s):
	Trace[1427131847]: [30.000923499s] [30.000923499s] END
	E0731 11:05:17.693875       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0731 11:05:17.693950       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-31 11:04:47.693769762 +0000 UTC m=+0.014191589) (total time: 30.000175776s):
	Trace[939984059]: [30.000175776s] [30.000175776s] END
	E0731 11:05:17.693957       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
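	(The three i/o timeouts above show coredns's initial List calls to the in-cluster apiserver VIP hanging for their full 30s right after startup, before kube-proxy had programmed the 10.96.0.1 service. A hedged connectivity probe from inside the cluster network — a throwaway busybox pod is used because the coredns image ships no shell, and the image tag and busybox's -z flag support are assumptions:)
	    # Confirm the kubernetes service VIP answers on 443:
	    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	      nc -zv -w 2 10.96.0.1 443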
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-464000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-464000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=ingress-addon-legacy-464000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T04_04_31_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:04:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-464000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:05:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:05:38 +0000   Mon, 31 Jul 2023 11:04:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:05:38 +0000   Mon, 31 Jul 2023 11:04:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:05:38 +0000   Mon, 31 Jul 2023 11:04:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:05:38 +0000   Mon, 31 Jul 2023 11:04:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.16
	  Hostname:    ingress-addon-legacy-464000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 a71bc9f73c8b4cab9f1a503bf76546ab
	  System UUID:                a71bc9f73c8b4cab9f1a503bf76546ab
	  Boot ID:                    c3f97070-c22f-4627-a224-5113e306c76f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-cmvt8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-wnp9c                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     68s
	  kube-system                 etcd-ingress-addon-legacy-464000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-apiserver-ingress-addon-legacy-464000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-464000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-gjm75                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-ingress-addon-legacy-464000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 77s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s   kubelet     Node ingress-addon-legacy-464000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet     Node ingress-addon-legacy-464000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet     Node ingress-addon-legacy-464000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s   kubelet     Node ingress-addon-legacy-464000 status is now: NodeReady
	  Normal  Starting                 68s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul31 11:04] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.632844] EINJ: EINJ table not found.
	[  +0.487531] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +0.043314] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000793] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.171305] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.058561] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.426507] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[  +0.149802] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.057681] systemd-fstab-generator[758]: Ignoring "noauto" for root device
	[  +0.066153] systemd-fstab-generator[771]: Ignoring "noauto" for root device
	[  +1.144671] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.127750] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +4.304210] systemd-fstab-generator[1533]: Ignoring "noauto" for root device
	[  +7.812960] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.106942] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.117163] systemd-fstab-generator[2599]: Ignoring "noauto" for root device
	[ +16.036992] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.854310] kauditd_printk_skb: 13 callbacks suppressed
	[  +3.332945] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jul31 11:05] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [addccea5c601] <==
	* raft2023/07/31 11:04:26 INFO: fe7873a2f8dc9fac became follower at term 0
	raft2023/07/31 11:04:26 INFO: newRaft fe7873a2f8dc9fac [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/31 11:04:26 INFO: fe7873a2f8dc9fac became follower at term 1
	raft2023/07/31 11:04:26 INFO: fe7873a2f8dc9fac switched to configuration voters=(18336533026636079020)
	2023-07-31 11:04:26.655673 W | auth: simple token is not cryptographically signed
	2023-07-31 11:04:26.715941 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-31 11:04:26.727678 I | etcdserver: fe7873a2f8dc9fac as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-31 11:04:26.728087 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-31 11:04:26.728155 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-31 11:04:26.728245 I | embed: listening for peers on 192.168.105.16:2380
	raft2023/07/31 11:04:26 INFO: fe7873a2f8dc9fac switched to configuration voters=(18336533026636079020)
	2023-07-31 11:04:26.728395 I | etcdserver/membership: added member fe7873a2f8dc9fac [https://192.168.105.16:2380] to cluster 52d74519557dac6b
	raft2023/07/31 11:04:27 INFO: fe7873a2f8dc9fac is starting a new election at term 1
	raft2023/07/31 11:04:27 INFO: fe7873a2f8dc9fac became candidate at term 2
	raft2023/07/31 11:04:27 INFO: fe7873a2f8dc9fac received MsgVoteResp from fe7873a2f8dc9fac at term 2
	raft2023/07/31 11:04:27 INFO: fe7873a2f8dc9fac became leader at term 2
	raft2023/07/31 11:04:27 INFO: raft.node: fe7873a2f8dc9fac elected leader fe7873a2f8dc9fac at term 2
	2023-07-31 11:04:27.401206 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-31 11:04:27.403159 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-31 11:04:27.403264 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-31 11:04:27.403349 I | etcdserver: published {Name:ingress-addon-legacy-464000 ClientURLs:[https://192.168.105.16:2379]} to cluster 52d74519557dac6b
	2023-07-31 11:04:27.403585 I | embed: ready to serve client requests
	2023-07-31 11:04:27.407016 I | embed: serving client requests on 192.168.105.16:2379
	2023-07-31 11:04:27.407898 I | embed: ready to serve client requests
	2023-07-31 11:04:27.410389 I | embed: serving client requests on 127.0.0.1:2379
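	(etcd prints both its client endpoints and its ClientTLS cert paths above, which is enough for a manual health query. A hedged sketch: it assumes etcdctl is available inside the guest, and reuses the serving cert as the client cert, which kubeadm issues with both server and client usages:)
	    sudo ETCDCTL_API=3 etcdctl \
	      --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health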
	
	* 
	* ==> kernel <==
	*  11:05:55 up 1 min,  0 users,  load average: 0.35, 0.17, 0.06
	Linux ingress-addon-legacy-464000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cc1502d13d21] <==
	* I0731 11:04:28.920251       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0731 11:04:28.922139       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.16, ResourceVersion: 0, AdditionalErrorMsg: 
	I0731 11:04:29.001610       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 11:04:29.002160       1 cache.go:39] Caches are synced for autoregister controller
	I0731 11:04:29.002405       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0731 11:04:29.005395       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 11:04:29.005456       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0731 11:04:29.903835       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0731 11:04:29.903909       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 11:04:29.919095       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0731 11:04:29.928153       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:04:29.928187       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0731 11:04:30.063228       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:04:30.073312       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0731 11:04:30.180334       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.16]
	I0731 11:04:30.180778       1 controller.go:609] quota admission added evaluator for: endpoints
	I0731 11:04:30.182504       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:04:31.220753       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0731 11:04:31.694369       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0731 11:04:31.909863       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0731 11:04:38.112244       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 11:04:47.000720       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0731 11:04:47.126576       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0731 11:04:50.400071       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0731 11:05:17.342525       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [1da437fa4777] <==
	* I0731 11:04:47.072125       1 shared_informer.go:230] Caches are synced for HPA 
	I0731 11:04:47.124967       1 shared_informer.go:230] Caches are synced for deployment 
	I0731 11:04:47.126921       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:04:47.131001       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df857e65-c7cc-477c-925f-41d59b24e8c8", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-wnp9c
	I0731 11:04:47.131013       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9310f6f2-aa6e-4467-9b4a-9d8539ee5ba0", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0731 11:04:47.156032       1 shared_informer.go:230] Caches are synced for disruption 
	I0731 11:04:47.156040       1 disruption.go:339] Sending events to api server.
	I0731 11:04:47.177486       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:04:47.177495       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 11:04:47.187102       1 shared_informer.go:230] Caches are synced for attach detach 
	I0731 11:04:47.191787       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0731 11:04:47.192778       1 shared_informer.go:230] Caches are synced for PV protection 
	I0731 11:04:47.201282       1 shared_informer.go:230] Caches are synced for expand 
	I0731 11:04:47.274552       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:04:47.475796       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0731 11:04:47.475813       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:04:50.383698       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0d670853-790a-4680-a879-8f445a3ff148", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0731 11:04:50.390812       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3bbebd9f-8091-407d-902e-950968df73b4", APIVersion:"apps/v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-wqtht
	I0731 11:04:50.416877       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6398f3dd-9746-4f1c-9596-00bc5fed0085", APIVersion:"batch/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-bmwxj
	I0731 11:04:50.435121       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6471f067-38fa-46f5-bc7a-f2ad827297b3", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gz6ks
	I0731 11:04:53.266687       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6398f3dd-9746-4f1c-9596-00bc5fed0085", APIVersion:"batch/v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 11:04:54.298262       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6471f067-38fa-46f5-bc7a-f2ad827297b3", APIVersion:"batch/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 11:05:28.625462       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b36599d9-c712-4dc5-bc33-2b233ee44326", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0731 11:05:28.627853       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"0908befa-edfc-433b-a7ee-39994b18fc68", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-cmvt8
	E0731 11:05:53.392359       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-qtkwb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [69811b1c49b1] <==
	* W0731 11:04:47.620069       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0731 11:04:47.625110       1 node.go:136] Successfully retrieved node IP: 192.168.105.16
	I0731 11:04:47.625132       1 server_others.go:186] Using iptables Proxier.
	I0731 11:04:47.625429       1 server.go:583] Version: v1.18.20
	I0731 11:04:47.629245       1 config.go:133] Starting endpoints config controller
	I0731 11:04:47.629862       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0731 11:04:47.630388       1 config.go:315] Starting service config controller
	I0731 11:04:47.630616       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0731 11:04:47.730473       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0731 11:04:47.730714       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [61eb610e9116] <==
	* W0731 11:04:28.926867       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 11:04:28.926871       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 11:04:28.940022       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 11:04:28.940111       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 11:04:28.941264       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0731 11:04:28.944364       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:04:28.944400       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:04:28.944557       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0731 11:04:28.960600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:04:28.960702       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:04:28.960767       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:04:28.960823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:04:28.960888       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:04:28.960930       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:04:28.960980       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:04:28.961019       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 11:04:28.961070       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:04:28.961108       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:04:28.961159       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:04:28.961356       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:04:29.821456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:04:29.851165       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:04:29.878829       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:04:29.986011       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 11:04:30.444591       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-31 11:04:05 UTC, ends at Mon 2023-07-31 11:05:55 UTC. --
	Jul 31 11:05:32 ingress-addon-legacy-464000 kubelet[2605]: E0731 11:05:32.763479    2605 pod_workers.go:191] Error syncing pod 33a838f2-ec03-4c99-b8f3-d29a9da140c8 ("hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"
	Jul 31 11:05:33 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:33.779889    2605 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-cmvt8 through plugin: invalid network status for
	Jul 31 11:05:33 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:33.784757    2605 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1379fe344484f2e9f9cd1cfe5585c97f8f0cdbf67b80839b49f5186f0e06f551
	Jul 31 11:05:33 ingress-addon-legacy-464000 kubelet[2605]: E0731 11:05:33.785236    2605 pod_workers.go:191] Error syncing pod 33a838f2-ec03-4c99-b8f3-d29a9da140c8 ("hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"
	Jul 31 11:05:44 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:44.075730    2605 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pt8lm" (UniqueName: "kubernetes.io/secret/a1645611-a279-452d-b985-5d18760b7c06-minikube-ingress-dns-token-pt8lm") pod "a1645611-a279-452d-b985-5d18760b7c06" (UID: "a1645611-a279-452d-b985-5d18760b7c06")
	Jul 31 11:05:44 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:44.077443    2605 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1645611-a279-452d-b985-5d18760b7c06-minikube-ingress-dns-token-pt8lm" (OuterVolumeSpecName: "minikube-ingress-dns-token-pt8lm") pod "a1645611-a279-452d-b985-5d18760b7c06" (UID: "a1645611-a279-452d-b985-5d18760b7c06"). InnerVolumeSpecName "minikube-ingress-dns-token-pt8lm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:05:44 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:44.179035    2605 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pt8lm" (UniqueName: "kubernetes.io/secret/a1645611-a279-452d-b985-5d18760b7c06-minikube-ingress-dns-token-pt8lm") on node "ingress-addon-legacy-464000" DevicePath ""
	Jul 31 11:05:44 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:44.940981    2605 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ed92f7fc20b8c5a3a1fee8bab266a8a24a49468c0fee326fe1702403ab35695
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:46.142726    2605 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1379fe344484f2e9f9cd1cfe5585c97f8f0cdbf67b80839b49f5186f0e06f551
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:46.271284    2605 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod33a838f2-ec03-4c99-b8f3-d29a9da140c8/7163e78a77afecc782520576109756a9df3b0d965413cd1356adb83d4abb0f59": none of the resources are being tracked.
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:46.979765    2605 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-cmvt8 through plugin: invalid network status for
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:46.987264    2605 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1379fe344484f2e9f9cd1cfe5585c97f8f0cdbf67b80839b49f5186f0e06f551
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:46.987611    2605 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7163e78a77afecc782520576109756a9df3b0d965413cd1356adb83d4abb0f59
	Jul 31 11:05:46 ingress-addon-legacy-464000 kubelet[2605]: E0731 11:05:46.987971    2605 pod_workers.go:191] Error syncing pod 33a838f2-ec03-4c99-b8f3-d29a9da140c8 ("hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-cmvt8_default(33a838f2-ec03-4c99-b8f3-d29a9da140c8)"
	Jul 31 11:05:48 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:48.006680    2605 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-cmvt8 through plugin: invalid network status for
	Jul 31 11:05:48 ingress-addon-legacy-464000 kubelet[2605]: E0731 11:05:48.652266    2605 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wqtht.1776ef0cdd009c4c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wqtht", UID:"1c04cf14-ea13-405c-9a45-dc8b92913a9e", APIVersion:"v1", ResourceVersion:"407", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-464000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a034326c8244c, ext:76982221356, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a034326c8244c, ext:76982221356, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wqtht.1776ef0cdd009c4c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 11:05:48 ingress-addon-legacy-464000 kubelet[2605]: E0731 11:05:48.663524    2605 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wqtht.1776ef0cdd009c4c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wqtht", UID:"1c04cf14-ea13-405c-9a45-dc8b92913a9e", APIVersion:"v1", ResourceVersion:"407", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-464000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a034326c8244c, ext:76982221356, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a0343275bcd1b, ext:76991898363, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wqtht.1776ef0cdd009c4c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 11:05:51 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:51.055067    2605 pod_container_deletor.go:77] Container "d7d92adba007442963a5456f3a5ae5361b2c8327b9dcbb7f44765dc35d82fcad" not found in pod's containers
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.878357    2605 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-webhook-cert") pod "1c04cf14-ea13-405c-9a45-dc8b92913a9e" (UID: "1c04cf14-ea13-405c-9a45-dc8b92913a9e")
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.879244    2605 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-299lb" (UniqueName: "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-ingress-nginx-token-299lb") pod "1c04cf14-ea13-405c-9a45-dc8b92913a9e" (UID: "1c04cf14-ea13-405c-9a45-dc8b92913a9e")
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.887949    2605 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1c04cf14-ea13-405c-9a45-dc8b92913a9e" (UID: "1c04cf14-ea13-405c-9a45-dc8b92913a9e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.888701    2605 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-ingress-nginx-token-299lb" (OuterVolumeSpecName: "ingress-nginx-token-299lb") pod "1c04cf14-ea13-405c-9a45-dc8b92913a9e" (UID: "1c04cf14-ea13-405c-9a45-dc8b92913a9e"). InnerVolumeSpecName "ingress-nginx-token-299lb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.982058    2605 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-webhook-cert") on node "ingress-addon-legacy-464000" DevicePath ""
	Jul 31 11:05:52 ingress-addon-legacy-464000 kubelet[2605]: I0731 11:05:52.982149    2605 reconciler.go:319] Volume detached for volume "ingress-nginx-token-299lb" (UniqueName: "kubernetes.io/secret/1c04cf14-ea13-405c-9a45-dc8b92913a9e-ingress-nginx-token-299lb") on node "ingress-addon-legacy-464000" DevicePath ""
	Jul 31 11:05:54 ingress-addon-legacy-464000 kubelet[2605]: W0731 11:05:54.156416    2605 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1c04cf14-ea13-405c-9a45-dc8b92913a9e/volumes" does not exist
	
	* 
	* ==> storage-provisioner [e153b178e670] <==
	* I0731 11:04:50.632953       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 11:04:50.638449       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 11:04:50.638512       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 11:04:50.641165       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 11:04:50.641320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1fb03d7f-f0f8-4753-a803-c69d819deeda", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-464000_4153d09e-15f3-4cb0-a96b-6aa923d128e3 became leader
	I0731 11:04:50.641334       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-464000_4153d09e-15f3-4cb0-a96b-6aa923d128e3!
	I0731 11:04:50.745147       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-464000_4153d09e-15f3-4cb0-a96b-6aa923d128e3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-464000 -n ingress-addon-legacy-464000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-464000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.82s)
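
The kubelet entries above show the restart back-off for hello-world-app doubling across consecutive CrashLoopBackOff events ("back-off 10s", then "back-off 20s"): kubelet starts at 10s and doubles the delay on each failed restart, capped by default at 5m. A minimal Go sketch of that policy, assuming the documented 5-minute default cap; nextBackoff is an invented name for illustration, not kubelet code:

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the previous delay, starting at 10s and capping at max,
	// matching the 10s -> 20s progression in the kubelet log above.
	func nextBackoff(prev, max time.Duration) time.Duration {
		if prev == 0 {
			return 10 * time.Second
		}
		if next := prev * 2; next < max {
			return next
		}
		return max
	}

	func main() {
		var d time.Duration
		for i := 0; i < 6; i++ {
			d = nextBackoff(d, 5*time.Minute)
			fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s
		}
	}

The first two values match the log; had the pod kept crashing, the delay would have grown to the cap and stayed there.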

TestMountStart/serial/StartWithMountFirst (10.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-339000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-339000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.14351075s)

-- stdout --
	* [mount-start-1-339000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-339000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-339000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-339000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-339000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-339000 -n mount-start-1-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-339000 -n mount-start-1-339000: exit status 7 (69.384958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.21s)
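
Every qemu2 start in this run fails the same way: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, which means no socket_vmnet daemon is listening on the host. A hedged pre-flight sketch in Go (not part of the test suite; the socket path is taken from the error above) that performs the same reachability check before any tests run:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The unix socket minikube's qemu2 driver uses for socket_vmnet networking.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			// This refusal is the condition behind every GUEST_PROVISION failure here.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused dial here predicts exit status 80 from every "start ... --driver=qemu2" invocation in this report.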

TestMultiNode/serial/FreshStart2Nodes (9.74s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-151000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-151000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.671992375s)

-- stdout --
	* [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-151000 in cluster multinode-151000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:08:01.965610    6598 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:08:01.965736    6598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:08:01.965739    6598 out.go:309] Setting ErrFile to fd 2...
	I0731 04:08:01.965742    6598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:08:01.965849    6598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:08:01.966880    6598 out.go:303] Setting JSON to false
	I0731 04:08:01.981992    6598 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9452,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:08:01.982053    6598 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:08:01.987389    6598 out.go:177] * [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:08:01.995372    6598 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:08:01.999362    6598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:08:01.995432    6598 notify.go:220] Checking for updates...
	I0731 04:08:02.002282    6598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:08:02.005361    6598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:08:02.008373    6598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:08:02.011305    6598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:08:02.014480    6598 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:08:02.018324    6598 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:08:02.025318    6598 start.go:298] selected driver: qemu2
	I0731 04:08:02.025323    6598 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:08:02.025330    6598 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:08:02.027191    6598 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:08:02.030327    6598 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:08:02.033464    6598 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:08:02.033484    6598 cni.go:84] Creating CNI manager for ""
	I0731 04:08:02.033488    6598 cni.go:136] 0 nodes found, recommending kindnet
	I0731 04:08:02.033493    6598 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 04:08:02.033499    6598 start_flags.go:319] config:
	{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:08:02.038805    6598 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:08:02.046355    6598 out.go:177] * Starting control plane node multinode-151000 in cluster multinode-151000
	I0731 04:08:02.050315    6598 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:08:02.050336    6598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:08:02.050347    6598 cache.go:57] Caching tarball of preloaded images
	I0731 04:08:02.050403    6598 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:08:02.050410    6598 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:08:02.050603    6598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/multinode-151000/config.json ...
	I0731 04:08:02.050616    6598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/multinode-151000/config.json: {Name:mk8f3e82767375970125a170f77e6e0675445826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:08:02.050827    6598 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:08:02.050856    6598 start.go:369] acquired machines lock for "multinode-151000" in 23.459µs
	I0731 04:08:02.050866    6598 start.go:93] Provisioning new machine with config: &{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:08:02.050906    6598 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:08:02.059325    6598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:08:02.075369    6598 start.go:159] libmachine.API.Create for "multinode-151000" (driver="qemu2")
	I0731 04:08:02.075397    6598 client.go:168] LocalClient.Create starting
	I0731 04:08:02.075457    6598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:08:02.075483    6598 main.go:141] libmachine: Decoding PEM data...
	I0731 04:08:02.075496    6598 main.go:141] libmachine: Parsing certificate...
	I0731 04:08:02.075546    6598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:08:02.075560    6598 main.go:141] libmachine: Decoding PEM data...
	I0731 04:08:02.075574    6598 main.go:141] libmachine: Parsing certificate...
	I0731 04:08:02.075890    6598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:08:02.192677    6598 main.go:141] libmachine: Creating SSH key...
	I0731 04:08:02.270322    6598 main.go:141] libmachine: Creating Disk image...
	I0731 04:08:02.270330    6598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:08:02.270467    6598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:02.279064    6598 main.go:141] libmachine: STDOUT: 
	I0731 04:08:02.279077    6598 main.go:141] libmachine: STDERR: 
	I0731 04:08:02.279137    6598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2 +20000M
	I0731 04:08:02.286268    6598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:08:02.286288    6598 main.go:141] libmachine: STDERR: 
	I0731 04:08:02.286302    6598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:02.286306    6598 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:08:02.286338    6598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:48:ee:d3:73:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:02.287925    6598 main.go:141] libmachine: STDOUT: 
	I0731 04:08:02.287939    6598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:08:02.287957    6598 client.go:171] LocalClient.Create took 212.558708ms
	I0731 04:08:04.290082    6598 start.go:128] duration metric: createHost completed in 2.239210625s
	I0731 04:08:04.290145    6598 start.go:83] releasing machines lock for "multinode-151000", held for 2.23933s
	W0731 04:08:04.290241    6598 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:08:04.301515    6598 out.go:177] * Deleting "multinode-151000" in qemu2 ...
	W0731 04:08:04.323515    6598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:08:04.323538    6598 start.go:687] Will try again in 5 seconds ...
	I0731 04:08:09.325699    6598 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:08:09.326095    6598 start.go:369] acquired machines lock for "multinode-151000" in 323µs
	I0731 04:08:09.326200    6598 start.go:93] Provisioning new machine with config: &{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:08:09.326448    6598 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:08:09.336127    6598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:08:09.383368    6598 start.go:159] libmachine.API.Create for "multinode-151000" (driver="qemu2")
	I0731 04:08:09.383402    6598 client.go:168] LocalClient.Create starting
	I0731 04:08:09.383538    6598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:08:09.383594    6598 main.go:141] libmachine: Decoding PEM data...
	I0731 04:08:09.383625    6598 main.go:141] libmachine: Parsing certificate...
	I0731 04:08:09.383705    6598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:08:09.383736    6598 main.go:141] libmachine: Decoding PEM data...
	I0731 04:08:09.383750    6598 main.go:141] libmachine: Parsing certificate...
	I0731 04:08:09.384247    6598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:08:09.515882    6598 main.go:141] libmachine: Creating SSH key...
	I0731 04:08:09.545624    6598 main.go:141] libmachine: Creating Disk image...
	I0731 04:08:09.545654    6598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:08:09.545800    6598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:09.554301    6598 main.go:141] libmachine: STDOUT: 
	I0731 04:08:09.554316    6598 main.go:141] libmachine: STDERR: 
	I0731 04:08:09.554368    6598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2 +20000M
	I0731 04:08:09.561527    6598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:08:09.561542    6598 main.go:141] libmachine: STDERR: 
	I0731 04:08:09.561553    6598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:09.561559    6598 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:08:09.561593    6598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:08:09.563110    6598 main.go:141] libmachine: STDOUT: 
	I0731 04:08:09.563125    6598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:08:09.563144    6598 client.go:171] LocalClient.Create took 179.733666ms
	I0731 04:08:11.565294    6598 start.go:128] duration metric: createHost completed in 2.238861916s
	I0731 04:08:11.565384    6598 start.go:83] releasing machines lock for "multinode-151000", held for 2.239317917s
	W0731 04:08:11.565919    6598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:08:11.579489    6598 out.go:177] 
	W0731 04:08:11.583740    6598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:08:11.583766    6598 out.go:239] * 
	* 
	W0731 04:08:11.586482    6598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:08:11.596622    6598 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-151000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (67.15ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.74s)
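
The --alsologtostderr trace makes the driver's recovery path explicit: LocalClient.Create fails on the socket dial, minikube deletes the half-created machine, waits 5 seconds ("Will try again in 5 seconds ..."), retries exactly once, and only then exits with GUEST_PROVISION (exit status 80). A sketch of that retry-once-with-cleanup shape; createHost and deleteHost are stand-ins for illustration, not minikube's actual functions:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// The error every attempt in this run hits.
	var errSocket = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	func createHost() error { return errSocket } // stand-in for libmachine.API.Create
	func deleteHost()       {}                   // stand-in for the "Deleting ... in qemu2" step

	// startHost mirrors the trace: fail, clean up, wait 5s, retry once, then give up.
	func startHost() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost()
		time.Sleep(5 * time.Second)
		return createHost() // second and final attempt
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // minikube exits 80 here
		}
	}

Because the root cause (no socket_vmnet listener) persists, the second attempt fails identically, which is why every multinode test in this group fails in roughly 10 seconds.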

TestMultiNode/serial/DeployApp2Nodes (93.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.234042ms)

** stderr ** 
	error: cluster "multinode-151000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- rollout status deployment/busybox: exit status 1 (54.983458ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.918542ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.710459ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.599959ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.679333ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 04:08:18.977634    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.024ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.566417ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.899792ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.945ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.101792ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 04:09:40.898387    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.390959ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.373458ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.345125ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.55325ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.788917ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.958125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.65s)
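The log above shows the test re-running the pod-IP query at roughly ten-second intervals until the subtest's budget lapsed. A minimal sketch of that poll-until-ready pattern, assuming the same wrapper binary and profile; the helper name, interval, and deadline are illustrative rather than the test's own constants:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs runs the same jsonpath query as multinode_test.go:493 and returns
// the space-separated pod IPs, or an error on a non-zero exit.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(90 * time.Second) // the subtest gave up after ~94s
	for time.Now().Before(deadline) {
		if ips, err := podIPs("multinode-151000"); err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(10 * time.Second) // matches the spacing of the retries above
	}
	fmt.Println("timed out waiting for pod IPs")
}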

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.509ms)

** stderr ** 
	error: no server found for cluster "multinode-151000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.451458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-151000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-151000 -v 3 --alsologtostderr: exit status 89 (40.147791ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-151000"

-- /stdout --
** stderr ** 
	I0731 04:09:45.433993    6699 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:45.434337    6699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.434344    6699 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:45.434346    6699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.434484    6699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:45.434685    6699 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:45.434852    6699 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:45.439096    6699 out.go:177] * The control plane node must be running for this command
	I0731 04:09:45.443130    6699 out.go:177]   To start a cluster, run: "minikube start -p multinode-151000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-151000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.437542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.16s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-151000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-151000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-151000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.3\",\"ClusterName\":\"multinode-151000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (33.277125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.16s)
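The assertion above parses `profile list --output json` and counts entries under Config.Nodes (expecting 3 after the two initial nodes plus the AddNode step, but finding only the single stale node). A sketch of that check, with struct shapes inferred from the JSON embedded in the failure message; the type names are illustrative, not minikube's own:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the fields needed from the JSON shown above.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		// The failing run reported 1 node here where the test wanted 3.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}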

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status --output json --alsologtostderr: exit status 7 (29.152208ms)

-- stdout --
	{"Name":"multinode-151000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0731 04:09:45.659778    6709 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:45.659928    6709 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.659931    6709 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:45.659933    6709 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.660065    6709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:45.660190    6709 out.go:303] Setting JSON to true
	I0731 04:09:45.660201    6709 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:45.660264    6709 notify.go:220] Checking for updates...
	I0731 04:09:45.660378    6709 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:45.660386    6709 status.go:255] checking status of multinode-151000 ...
	I0731 04:09:45.660582    6709 status.go:330] multinode-151000 host status = "Stopped" (err=<nil>)
	I0731 04:09:45.660586    6709 status.go:343] host is not running, skipping remaining checks
	I0731 04:09:45.660588    6709 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-151000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.48875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
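The decode error above is a JSON shape mismatch: with only one (stopped) node, `status --output json` printed a single object, while the test unmarshals into a `[]cmd.Status` slice. A self-contained repro using the exact stdout from the run, with Status standing in for the test's type:

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-151000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(raw, &many))
	// Prints: json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	if err := json.Unmarshal(raw, &one); err == nil {
		fmt.Println("single-object decode works; host =", one.Host)
	}
}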

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 node stop m03: exit status 85 (43.826333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-151000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status: exit status 7 (28.635958ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr: exit status 7 (28.378083ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 04:09:45.790131    6717 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:45.790273    6717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.790276    6717 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:45.790278    6717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.790385    6717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:45.790498    6717 out.go:303] Setting JSON to false
	I0731 04:09:45.790517    6717 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:45.790571    6717 notify.go:220] Checking for updates...
	I0731 04:09:45.790695    6717 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:45.790704    6717 status.go:255] checking status of multinode-151000 ...
	I0731 04:09:45.790899    6717 status.go:330] multinode-151000 host status = "Stopped" (err=<nil>)
	I0731 04:09:45.790906    6717 status.go:343] host is not running, skipping remaining checks
	I0731 04:09:45.790908    6717 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr": multinode-151000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.287ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 node start m03 --alsologtostderr: exit status 85 (46.180375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 04:09:45.847025    6721 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:45.847437    6721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.847441    6721 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:45.847443    6721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:45.847585    6721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:45.847816    6721 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:45.847981    6721 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:45.851722    6721 out.go:177] 
	W0731 04:09:45.855730    6721 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 04:09:45.855735    6721 out.go:239] * 
	* 
	W0731 04:09:45.857563    6721 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:09:45.861608    6721 out.go:177] 

** /stderr **
multinode_test.go:256: I0731 04:09:45.847025    6721 out.go:296] Setting OutFile to fd 1 ...
I0731 04:09:45.847437    6721 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:09:45.847441    6721 out.go:309] Setting ErrFile to fd 2...
I0731 04:09:45.847443    6721 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:09:45.847585    6721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:09:45.847816    6721 mustload.go:65] Loading cluster: multinode-151000
I0731 04:09:45.847981    6721 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:09:45.851722    6721 out.go:177] 
W0731 04:09:45.855730    6721 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 04:09:45.855735    6721 out.go:239] * 
* 
W0731 04:09:45.857563    6721 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 04:09:45.861608    6721 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-151000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status: exit status 7 (28.687625ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-151000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.698791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-151000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-151000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.1854585s)

-- stdout --
	* [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-151000 in cluster multinode-151000
	* Restarting existing qemu2 VM for "multinode-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:09:46.039478    6731 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:46.039600    6731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:46.039602    6731 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:46.039605    6731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:46.039710    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:46.040684    6731 out.go:303] Setting JSON to false
	I0731 04:09:46.055950    6731 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9557,"bootTime":1690792229,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:09:46.056027    6731 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:09:46.064707    6731 out.go:177] * [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:09:46.068715    6731 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:09:46.072710    6731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:09:46.068777    6731 notify.go:220] Checking for updates...
	I0731 04:09:46.083140    6731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:09:46.086742    6731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:09:46.089622    6731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:09:46.092668    6731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:09:46.096018    6731 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:46.096081    6731 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:09:46.100674    6731 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:09:46.107785    6731 start.go:298] selected driver: qemu2
	I0731 04:09:46.107791    6731 start.go:898] validating driver "qemu2" against &{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:09:46.107869    6731 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:09:46.109774    6731 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:09:46.109799    6731 cni.go:84] Creating CNI manager for ""
	I0731 04:09:46.109803    6731 cni.go:136] 1 nodes found, recommending kindnet
	I0731 04:09:46.109808    6731 start_flags.go:319] config:
	{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:09:46.113901    6731 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:09:46.120680    6731 out.go:177] * Starting control plane node multinode-151000 in cluster multinode-151000
	I0731 04:09:46.124637    6731 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:09:46.124664    6731 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:09:46.124672    6731 cache.go:57] Caching tarball of preloaded images
	I0731 04:09:46.124742    6731 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:09:46.124748    6731 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:09:46.124826    6731 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/multinode-151000/config.json ...
	I0731 04:09:46.125199    6731 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:09:46.125233    6731 start.go:369] acquired machines lock for "multinode-151000" in 27.792µs
	I0731 04:09:46.125243    6731 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:09:46.125248    6731 fix.go:54] fixHost starting: 
	I0731 04:09:46.125384    6731 fix.go:102] recreateIfNeeded on multinode-151000: state=Stopped err=<nil>
	W0731 04:09:46.125396    6731 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:09:46.132700    6731 out.go:177] * Restarting existing qemu2 VM for "multinode-151000" ...
	I0731 04:09:46.136704    6731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:09:46.138777    6731 main.go:141] libmachine: STDOUT: 
	I0731 04:09:46.138801    6731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:09:46.138828    6731 fix.go:56] fixHost completed within 13.580417ms
	I0731 04:09:46.138833    6731 start.go:83] releasing machines lock for "multinode-151000", held for 13.59525ms
	W0731 04:09:46.138841    6731 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:09:46.138881    6731 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:09:46.138887    6731 start.go:687] Will try again in 5 seconds ...
	I0731 04:09:51.140958    6731 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:09:51.141452    6731 start.go:369] acquired machines lock for "multinode-151000" in 388.917µs
	I0731 04:09:51.141596    6731 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:09:51.141615    6731 fix.go:54] fixHost starting: 
	I0731 04:09:51.142381    6731 fix.go:102] recreateIfNeeded on multinode-151000: state=Stopped err=<nil>
	W0731 04:09:51.142409    6731 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:09:51.146770    6731 out.go:177] * Restarting existing qemu2 VM for "multinode-151000" ...
	I0731 04:09:51.153894    6731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:09:51.163786    6731 main.go:141] libmachine: STDOUT: 
	I0731 04:09:51.163835    6731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:09:51.163913    6731 fix.go:56] fixHost completed within 22.29725ms
	I0731 04:09:51.163929    6731 start.go:83] releasing machines lock for "multinode-151000", held for 22.455667ms
	W0731 04:09:51.164121    6731 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:09:51.171753    6731 out.go:177] 
	W0731 04:09:51.174945    6731 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:09:51.175012    6731 out.go:239] * 
	* 
	W0731 04:09:51.177807    6731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:09:51.185768    6731 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-151000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-151000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (33.474375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 node delete m03: exit status 89 (38.505792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-151000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-151000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr: exit status 7 (28.481708ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 04:09:51.370584    6746 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:51.370706    6746 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.370714    6746 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:51.370717    6746 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.370821    6746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:51.370921    6746 out.go:303] Setting JSON to false
	I0731 04:09:51.370931    6746 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:51.371000    6746 notify.go:220] Checking for updates...
	I0731 04:09:51.371114    6746 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:51.371120    6746 status.go:255] checking status of multinode-151000 ...
	I0731 04:09:51.371304    6746 status.go:330] multinode-151000 host status = "Stopped" (err=<nil>)
	I0731 04:09:51.371308    6746 status.go:343] host is not running, skipping remaining checks
	I0731 04:09:51.371310    6746 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.797875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
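The status calls in the post-mortem use Go templates over minikube's status struct; the available fields are visible in the status.go:257 line above (Name, Host, Kubelet, APIServer, Kubeconfig). A sketch of a richer one-line query against the same stopped profile (the output shown is what the log above implies, not a captured run):

    out/minikube-darwin-arm64 status -p multinode-151000 \
      --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
    # multinode-151000: host=Stopped kubelet=Stopped apiserver=Stopped
    # (still exits 7, which the helpers above treat as "may be ok")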

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status: exit status 7 (29.065083ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr: exit status 7 (28.485584ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 04:09:51.516576    6754 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:51.516693    6754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.516696    6754 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:51.516699    6754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.516815    6754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:51.516922    6754 out.go:303] Setting JSON to false
	I0731 04:09:51.516934    6754 mustload.go:65] Loading cluster: multinode-151000
	I0731 04:09:51.517000    6754 notify.go:220] Checking for updates...
	I0731 04:09:51.517125    6754 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:51.517129    6754 status.go:255] checking status of multinode-151000 ...
	I0731 04:09:51.517304    6754 status.go:330] multinode-151000 host status = "Stopped" (err=<nil>)
	I0731 04:09:51.517307    6754 status.go:343] host is not running, skipping remaining checks
	I0731 04:09:51.517309    6754 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr": multinode-151000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-151000 status --alsologtostderr": multinode-151000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (28.454625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
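The assertions at multinode_test.go:333 and :337 fire because the status output contains only one "host: Stopped" / "kubelet: Stopped" pair; a two-node cluster would produce two of each, but the second node was never created. The count the test appears to perform can be reproduced by hand; a sketch:

    out/minikube-darwin-arm64 -p multinode-151000 status | grep -c 'host: Stopped'
    # prints 1 here; the test expects 2 (control plane plus one worker)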

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178857584s)

-- stdout --
	* [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-151000 in cluster multinode-151000
	* Restarting existing qemu2 VM for "multinode-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:09:51.573506    6758 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:09:51.573612    6758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.573615    6758 out.go:309] Setting ErrFile to fd 2...
	I0731 04:09:51.573617    6758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:09:51.573726    6758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:09:51.574724    6758 out.go:303] Setting JSON to false
	I0731 04:09:51.589946    6758 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9562,"bootTime":1690792229,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:09:51.590028    6758 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:09:51.594676    6758 out.go:177] * [multinode-151000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:09:51.601663    6758 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:09:51.605647    6758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:09:51.601726    6758 notify.go:220] Checking for updates...
	I0731 04:09:51.612598    6758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:09:51.615633    6758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:09:51.618662    6758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:09:51.621623    6758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:09:51.624944    6758 config.go:182] Loaded profile config "multinode-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:09:51.625196    6758 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:09:51.629573    6758 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:09:51.636629    6758 start.go:298] selected driver: qemu2
	I0731 04:09:51.636633    6758 start.go:898] validating driver "qemu2" against &{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151
000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:09:51.636703    6758 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:09:51.638644    6758 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:09:51.638671    6758 cni.go:84] Creating CNI manager for ""
	I0731 04:09:51.638675    6758 cni.go:136] 1 nodes found, recommending kindnet
	I0731 04:09:51.638681    6758 start_flags.go:319] config:
	{Name:multinode-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:09:51.642691    6758 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:09:51.649584    6758 out.go:177] * Starting control plane node multinode-151000 in cluster multinode-151000
	I0731 04:09:51.652573    6758 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:09:51.652596    6758 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:09:51.652609    6758 cache.go:57] Caching tarball of preloaded images
	I0731 04:09:51.652674    6758 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:09:51.652679    6758 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:09:51.652766    6758 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/multinode-151000/config.json ...
	I0731 04:09:51.653143    6758 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:09:51.653173    6758 start.go:369] acquired machines lock for "multinode-151000" in 24.167µs
	I0731 04:09:51.653182    6758 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:09:51.653187    6758 fix.go:54] fixHost starting: 
	I0731 04:09:51.653308    6758 fix.go:102] recreateIfNeeded on multinode-151000: state=Stopped err=<nil>
	W0731 04:09:51.653317    6758 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:09:51.660394    6758 out.go:177] * Restarting existing qemu2 VM for "multinode-151000" ...
	I0731 04:09:51.664634    6758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:09:51.666355    6758 main.go:141] libmachine: STDOUT: 
	I0731 04:09:51.666370    6758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:09:51.666393    6758 fix.go:56] fixHost completed within 13.206958ms
	I0731 04:09:51.666398    6758 start.go:83] releasing machines lock for "multinode-151000", held for 13.221417ms
	W0731 04:09:51.666406    6758 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:09:51.666445    6758 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:09:51.666450    6758 start.go:687] Will try again in 5 seconds ...
	I0731 04:09:56.668530    6758 start.go:365] acquiring machines lock for multinode-151000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:09:56.668914    6758 start.go:369] acquired machines lock for "multinode-151000" in 307.417µs
	I0731 04:09:56.669052    6758 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:09:56.669073    6758 fix.go:54] fixHost starting: 
	I0731 04:09:56.669774    6758 fix.go:102] recreateIfNeeded on multinode-151000: state=Stopped err=<nil>
	W0731 04:09:56.669801    6758 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:09:56.674195    6758 out.go:177] * Restarting existing qemu2 VM for "multinode-151000" ...
	I0731 04:09:56.682294    6758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
	I0731 04:09:56.691679    6758 main.go:141] libmachine: STDOUT: 
	I0731 04:09:56.691731    6758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:09:56.691798    6758 fix.go:56] fixHost completed within 22.726916ms
	I0731 04:09:56.691818    6758 start.go:83] releasing machines lock for "multinode-151000", held for 22.883375ms
	W0731 04:09:56.692010    6758 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:09:56.699087    6758 out.go:177] 
	W0731 04:09:56.703271    6758 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:09:56.703321    6758 out.go:239] * 
	* 
	W0731 04:09:56.706025    6758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:09:56.713196    6758 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (66.9085ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
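The libmachine lines above also show how the network is wired: socket_vmnet_client first connects to /var/run/socket_vmnet and then runs qemu with the connected socket as file descriptor 3, which the guest NIC consumes via -netdev socket,id=net0,fd=3. A trimmed sketch of the logged invocation (values taken from the log; disk, ISO, and QMP arguments elided):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
        -device virtio-net-pci,netdev=net0,mac=76:23:3b:af:0a:27 \
        -netdev socket,id=net0,fd=3 \
        -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/multinode-151000/disk.qcow2
    # The ERROR above is emitted by socket_vmnet_client itself: its connect() to
    # /var/run/socket_vmnet is refused, so qemu is never launched.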

TestMultiNode/serial/ValidateNameConflict (19.73s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-151000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-151000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-151000-m01 --driver=qemu2 : exit status 80 (9.734943083s)

-- stdout --
	* [multinode-151000-m01] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-151000-m01 in cluster multinode-151000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-151000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-151000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-151000-m02 --driver=qemu2 
E0731 04:10:08.194981    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.201383    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.213560    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.235643    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.277716    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.359814    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.521920    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:08.844078    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:09.485106    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:10.767494    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
E0731 04:10:13.329750    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-151000-m02 --driver=qemu2 : exit status 80 (9.710252792s)

-- stdout --
	* [multinode-151000-m02] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-151000-m02 in cluster multinode-151000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-151000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-151000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-151000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-151000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-151000: exit status 89 (79.8225ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-151000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-151000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-151000 -n multinode-151000: exit status 7 (33.129791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.73s)
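The interleaved E0731 cert_rotation lines are unrelated to the name-conflict check: a client is still watching the client certificate of the ingress-addon-legacy-464000 profile, whose files no longer exist. Cleaning up the stale profile (or just its kubeconfig context) should silence them in later runs; a sketch, with the profile name taken from the error lines:

    # minikube delete also prunes the profile's kubeconfig entries:
    out/minikube-darwin-arm64 delete -p ingress-addon-legacy-464000
    # or remove only the dangling kubeconfig context:
    kubectl config delete-context ingress-addon-legacy-464000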

TestPreload (9.86s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-608000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0731 04:10:18.449934    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-608000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.6923125s)

-- stdout --
	* [test-preload-608000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-608000 in cluster test-preload-608000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-608000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:10:16.684054    6812 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:10:16.684181    6812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:10:16.684183    6812 out.go:309] Setting ErrFile to fd 2...
	I0731 04:10:16.684186    6812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:10:16.684318    6812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:10:16.685399    6812 out.go:303] Setting JSON to false
	I0731 04:10:16.700753    6812 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9587,"bootTime":1690792229,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:10:16.700813    6812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:10:16.704675    6812 out.go:177] * [test-preload-608000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:10:16.712707    6812 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:10:16.712781    6812 notify.go:220] Checking for updates...
	I0731 04:10:16.716713    6812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:10:16.719727    6812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:10:16.722785    6812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:10:16.725724    6812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:10:16.728737    6812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:10:16.732032    6812 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:10:16.732072    6812 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:10:16.736583    6812 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:10:16.743692    6812 start.go:298] selected driver: qemu2
	I0731 04:10:16.743701    6812 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:10:16.743708    6812 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:10:16.745519    6812 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:10:16.748578    6812 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:10:16.751768    6812 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:10:16.751784    6812 cni.go:84] Creating CNI manager for ""
	I0731 04:10:16.751790    6812 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:10:16.751795    6812 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:10:16.751801    6812 start_flags.go:319] config:
	{Name:test-preload-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-608000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Net
workPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:10:16.756020    6812 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.762754    6812 out.go:177] * Starting control plane node test-preload-608000 in cluster test-preload-608000
	I0731 04:10:16.766687    6812 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 04:10:16.766774    6812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/test-preload-608000/config.json ...
	I0731 04:10:16.766794    6812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/test-preload-608000/config.json: {Name:mkf4f4fa015a6b58232ba7e6938c3e7da7e631c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:10:16.766794    6812 cache.go:107] acquiring lock: {Name:mkd965e87299c119b23fe0eb0b9d8acc1778f75e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.766812    6812 cache.go:107] acquiring lock: {Name:mk131e0de69d94bbe8b145b10ed08d1460d6b04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.766832    6812 cache.go:107] acquiring lock: {Name:mk262c52919a011e3c4e313a60a439a54809058a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767025    6812 cache.go:107] acquiring lock: {Name:mke947b6a1e7f92f232204e0a6ce20452988ff6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767043    6812 cache.go:107] acquiring lock: {Name:mk965dbb6375342ad1f0d0d2e77817ebab05b9a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767058    6812 cache.go:107] acquiring lock: {Name:mk974523a20f586f9a7475c0b1a05cde7fdcbe7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767052    6812 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 04:10:16.767050    6812 cache.go:107] acquiring lock: {Name:mk5b2886a8271f2d7669ea3812d3ab6cc67c8646 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767026    6812 cache.go:107] acquiring lock: {Name:mk57892f4585eedbbf8c51df547d22445dda916e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:10:16.767210    6812 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 04:10:16.767221    6812 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 04:10:16.767056    6812 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 04:10:16.767035    6812 start.go:365] acquiring machines lock for test-preload-608000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:10:16.767221    6812 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 04:10:16.767139    6812 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:10:16.767358    6812 start.go:369] acquired machines lock for "test-preload-608000" in 92.417µs
	I0731 04:10:16.767398    6812 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 04:10:16.767374    6812 start.go:93] Provisioning new machine with config: &{Name:test-preload-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-pr
eload-608000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:10:16.767421    6812 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:10:16.767421    6812 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 04:10:16.775676    6812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:10:16.780950    6812 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 04:10:16.782850    6812 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 04:10:16.782969    6812 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 04:10:16.785690    6812 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 04:10:16.785787    6812 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 04:10:16.785811    6812 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 04:10:16.785879    6812 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 04:10:16.785928    6812 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 04:10:16.792050    6812 start.go:159] libmachine.API.Create for "test-preload-608000" (driver="qemu2")
	I0731 04:10:16.792073    6812 client.go:168] LocalClient.Create starting
	I0731 04:10:16.792144    6812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:10:16.792164    6812 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:16.792177    6812 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:16.792225    6812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:10:16.792241    6812 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:16.792247    6812 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:16.792574    6812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:10:16.912583    6812 main.go:141] libmachine: Creating SSH key...
	I0731 04:10:17.007687    6812 main.go:141] libmachine: Creating Disk image...
	I0731 04:10:17.007698    6812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:10:17.007850    6812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:17.017181    6812 main.go:141] libmachine: STDOUT: 
	I0731 04:10:17.017205    6812 main.go:141] libmachine: STDERR: 
	I0731 04:10:17.017267    6812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2 +20000M
	I0731 04:10:17.025154    6812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:10:17.025168    6812 main.go:141] libmachine: STDERR: 
	I0731 04:10:17.025183    6812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:17.025189    6812 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:10:17.025243    6812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:84:4e:64:0c:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:17.026977    6812 main.go:141] libmachine: STDOUT: 
	I0731 04:10:17.027005    6812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:10:17.027023    6812 client.go:171] LocalClient.Create took 234.9495ms
	I0731 04:10:18.079516    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 04:10:18.090807    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 04:10:18.133516    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 04:10:18.203584    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 04:10:18.203603    6812 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.436627s
	I0731 04:10:18.203610    6812 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0731 04:10:18.277431    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0731 04:10:18.346738    6812 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 04:10:18.346776    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 04:10:18.784678    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0731 04:10:18.948946    6812 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 04:10:18.949050    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 04:10:19.003167    6812 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 04:10:19.027576    6812 start.go:128] duration metric: createHost completed in 2.26019275s
	I0731 04:10:19.027615    6812 start.go:83] releasing machines lock for "test-preload-608000", held for 2.260296583s
	W0731 04:10:19.027675    6812 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:10:19.038333    6812 out.go:177] * Deleting "test-preload-608000" in qemu2 ...
	W0731 04:10:19.058766    6812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:10:19.058811    6812 start.go:687] Will try again in 5 seconds ...
	I0731 04:10:19.684234    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 04:10:19.684286    6812 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.9175575s
	I0731 04:10:19.684338    6812 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 04:10:20.149856    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 04:10:20.149903    6812 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.38302125s
	I0731 04:10:20.149929    6812 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 04:10:21.205285    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 04:10:21.205335    6812 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.438380666s
	I0731 04:10:21.205390    6812 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 04:10:21.679862    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 04:10:21.679909    6812 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.91322675s
	I0731 04:10:21.679965    6812 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 04:10:23.376639    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 04:10:23.376687    6812 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.610025042s
	I0731 04:10:23.376713    6812 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 04:10:23.823475    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 04:10:23.823517    6812 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.056736541s
	I0731 04:10:23.823567    6812 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 04:10:24.058941    6812 start.go:365] acquiring machines lock for test-preload-608000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:10:24.059354    6812 start.go:369] acquired machines lock for "test-preload-608000" in 359.334µs
	I0731 04:10:24.059431    6812 start.go:93] Provisioning new machine with config: &{Name:test-preload-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-608000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:10:24.059699    6812 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:10:24.070257    6812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:10:24.117326    6812 start.go:159] libmachine.API.Create for "test-preload-608000" (driver="qemu2")
	I0731 04:10:24.117358    6812 client.go:168] LocalClient.Create starting
	I0731 04:10:24.117576    6812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:10:24.117637    6812 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:24.117665    6812 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:24.117738    6812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:10:24.117766    6812 main.go:141] libmachine: Decoding PEM data...
	I0731 04:10:24.117779    6812 main.go:141] libmachine: Parsing certificate...
	I0731 04:10:24.118290    6812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:10:24.249146    6812 main.go:141] libmachine: Creating SSH key...
	I0731 04:10:24.289231    6812 main.go:141] libmachine: Creating Disk image...
	I0731 04:10:24.289237    6812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:10:24.289387    6812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:24.297974    6812 main.go:141] libmachine: STDOUT: 
	I0731 04:10:24.297992    6812 main.go:141] libmachine: STDERR: 
	I0731 04:10:24.298057    6812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2 +20000M
	I0731 04:10:24.305475    6812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:10:24.305488    6812 main.go:141] libmachine: STDERR: 
	I0731 04:10:24.305505    6812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:24.305509    6812 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:10:24.305550    6812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c1:87:f0:4b:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/test-preload-608000/disk.qcow2
	I0731 04:10:24.307148    6812 main.go:141] libmachine: STDOUT: 
	I0731 04:10:24.307163    6812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:10:24.307176    6812 client.go:171] LocalClient.Create took 189.816875ms
	I0731 04:10:25.518589    6812 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 04:10:25.518649    6812 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.751891208s
	I0731 04:10:25.518680    6812 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 04:10:25.518722    6812 cache.go:87] Successfully saved all images to host disk.
	I0731 04:10:26.309251    6812 start.go:128] duration metric: createHost completed in 2.249580666s
	I0731 04:10:26.309302    6812 start.go:83] releasing machines lock for "test-preload-608000", held for 2.249977667s
	W0731 04:10:26.309543    6812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:10:26.319133    6812 out.go:177] 
	W0731 04:10:26.323063    6812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:10:26.323088    6812 out.go:239] * 
	* 
	W0731 04:10:26.325675    6812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:10:26.336099    6812 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-608000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-07-31 04:10:26.352167 -0700 PDT m=+1009.679358376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-608000 -n test-preload-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-608000 -n test-preload-608000: exit status 7 (68.71725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-608000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-608000
--- FAIL: TestPreload (9.86s)
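Every qemu2 start in this run fails at the same step: minikube wraps qemu-system-aarch64 in socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. One way to confirm the daemon (rather than minikube or QEMU) is at fault is to drive the client by hand; this is a diagnostic sketch run outside the test suite, and the echo payload is an arbitrary stand-in for the QEMU command line:

	# Does the daemon's unix socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# socket_vmnet_client connects to the socket, then execs the given command
	# with the vmnet file descriptor attached. With the daemon down it fails
	# with the same "Connection refused" seen throughout these logs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok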

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-898000 --memory=2048 --driver=qemu2 
E0731 04:10:28.701635    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-898000 --memory=2048 --driver=qemu2 : exit status 80 (9.814030833s)

-- stdout --
	* [scheduled-stop-898000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-898000 in cluster scheduled-stop-898000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-898000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-898000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-898000 in cluster scheduled-stop-898000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-898000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-07-31 04:10:36.335513 -0700 PDT m=+1019.662932043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-898000 -n scheduled-stop-898000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-898000 -n scheduled-stop-898000: exit status 7 (69.936583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-898000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-898000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-898000
--- FAIL: TestScheduledStopUnix (9.98s)
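The scheduled-stop logic is never reached; provisioning dies on the same socket_vmnet error as above. If the daemon simply is not running on the CI host, restarting it should clear this whole family of failures. A sketch of a manual restart, assuming the from-source install under /opt/socket_vmnet that the logs show; the gateway address is the conventional one from the socket_vmnet README and may differ on this host:

	# The daemon needs root to create the vmnet interface; its last argument
	# is the unix socket it serves clients on.
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	    --vmnet-gateway=192.168.105.1 \
	    /var/run/socket_vmnet &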

TestSkaffold (14.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2450120720 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-403000 --memory=2600 --driver=qemu2 
E0731 04:10:49.183909    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-403000 --memory=2600 --driver=qemu2 : exit status 80 (9.879779208s)

-- stdout --
	* [skaffold-403000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-403000 in cluster skaffold-403000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-403000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-403000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-403000 in cluster skaffold-403000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-403000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-07-31 04:10:50.462029 -0700 PDT m=+1033.789768376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-403000 -n skaffold-403000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-403000 -n skaffold-403000: exit status 7 (63.473583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-403000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-403000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-403000
--- FAIL: TestSkaffold (14.13s)

TestRunningBinaryUpgrade (127.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0731 04:11:57.032828    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:12:24.737164    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:12:52.066311    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-31 04:13:38.561699 -0700 PDT m=+1201.893260501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-143000 -n running-upgrade-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-143000 -n running-upgrade-143000: exit status 85 (84.987291ms)

-- stdout --
	* Profile "running-upgrade-143000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-143000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-143000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-143000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-143000\"")
helpers_test.go:175: Cleaning up "running-upgrade-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-143000
--- FAIL: TestRunningBinaryUpgrade (127.84s)
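Unlike the neighboring failures, this one is not caused by socket_vmnet: the test first downloads the old v1.6.2 minikube release to upgrade from, and that download returns 404. On this machine the 404 is expected, since minikube did not publish darwin/arm64 binaries as far back as v1.6.2. A hypothetical spot check against the release mirror (URL layout assumed from minikube's GCS releases bucket; the test may fetch from GitHub releases instead, where the arm64 asset is equally absent):

	# -f fails on HTTP errors, -I sends a HEAD request only.
	curl -fsSI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64   # 404: never published
	curl -fsSI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-amd64   # the Intel binary of that era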

TestKubernetesUpgrade (15.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.810279375s)

-- stdout --
	* [kubernetes-upgrade-714000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-714000 in cluster kubernetes-upgrade-714000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:13:38.960863    7306 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:13:38.960980    7306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:13:38.960983    7306 out.go:309] Setting ErrFile to fd 2...
	I0731 04:13:38.960985    7306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:13:38.961105    7306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:13:38.962135    7306 out.go:303] Setting JSON to false
	I0731 04:13:38.977328    7306 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9789,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:13:38.977393    7306 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:13:38.981957    7306 out.go:177] * [kubernetes-upgrade-714000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:13:38.989913    7306 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:13:38.989966    7306 notify.go:220] Checking for updates...
	I0731 04:13:38.993945    7306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:13:38.996843    7306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:13:38.999982    7306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:13:39.002969    7306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:13:39.005943    7306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:13:39.009306    7306 config.go:182] Loaded profile config "cert-expiration-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:13:39.009371    7306 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:13:39.009409    7306 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:13:39.012961    7306 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:13:39.019905    7306 start.go:298] selected driver: qemu2
	I0731 04:13:39.019913    7306 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:13:39.019920    7306 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:13:39.021801    7306 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:13:39.024936    7306 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:13:39.026390    7306 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 04:13:39.026404    7306 cni.go:84] Creating CNI manager for ""
	I0731 04:13:39.026409    7306 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:13:39.026413    7306 start_flags.go:319] config:
	{Name:kubernetes-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-714000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:13:39.030564    7306 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:13:39.037910    7306 out.go:177] * Starting control plane node kubernetes-upgrade-714000 in cluster kubernetes-upgrade-714000
	I0731 04:13:39.041855    7306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 04:13:39.041880    7306 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 04:13:39.041890    7306 cache.go:57] Caching tarball of preloaded images
	I0731 04:13:39.041957    7306 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:13:39.041962    7306 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0731 04:13:39.042039    7306 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kubernetes-upgrade-714000/config.json ...
	I0731 04:13:39.042050    7306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kubernetes-upgrade-714000/config.json: {Name:mkcf13b59d172427f31f6aa086339d9f12eb577e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:13:39.042249    7306 start.go:365] acquiring machines lock for kubernetes-upgrade-714000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:13:39.042280    7306 start.go:369] acquired machines lock for "kubernetes-upgrade-714000" in 23.708µs
	I0731 04:13:39.042290    7306 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-714000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:13:39.042316    7306 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:13:39.046947    7306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:13:39.062870    7306 start.go:159] libmachine.API.Create for "kubernetes-upgrade-714000" (driver="qemu2")
	I0731 04:13:39.062885    7306 client.go:168] LocalClient.Create starting
	I0731 04:13:39.062941    7306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:13:39.062962    7306 main.go:141] libmachine: Decoding PEM data...
	I0731 04:13:39.062970    7306 main.go:141] libmachine: Parsing certificate...
	I0731 04:13:39.063000    7306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:13:39.063013    7306 main.go:141] libmachine: Decoding PEM data...
	I0731 04:13:39.063021    7306 main.go:141] libmachine: Parsing certificate...
	I0731 04:13:39.063324    7306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:13:39.185094    7306 main.go:141] libmachine: Creating SSH key...
	I0731 04:13:39.330818    7306 main.go:141] libmachine: Creating Disk image...
	I0731 04:13:39.330824    7306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:13:39.330974    7306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:39.340404    7306 main.go:141] libmachine: STDOUT: 
	I0731 04:13:39.340424    7306 main.go:141] libmachine: STDERR: 
	I0731 04:13:39.340480    7306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2 +20000M
	I0731 04:13:39.347620    7306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:13:39.347631    7306 main.go:141] libmachine: STDERR: 
	I0731 04:13:39.347651    7306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:39.347664    7306 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:13:39.347701    7306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7e:b1:dc:08:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:39.349248    7306 main.go:141] libmachine: STDOUT: 
	I0731 04:13:39.349260    7306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:13:39.349279    7306 client.go:171] LocalClient.Create took 286.397083ms
	I0731 04:13:41.351396    7306 start.go:128] duration metric: createHost completed in 2.309115875s
	I0731 04:13:41.351489    7306 start.go:83] releasing machines lock for "kubernetes-upgrade-714000", held for 2.309221667s
	W0731 04:13:41.351558    7306 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:13:41.360918    7306 out.go:177] * Deleting "kubernetes-upgrade-714000" in qemu2 ...
	W0731 04:13:41.382448    7306 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:13:41.382473    7306 start.go:687] Will try again in 5 seconds ...
	I0731 04:13:46.384609    7306 start.go:365] acquiring machines lock for kubernetes-upgrade-714000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:13:46.385065    7306 start.go:369] acquired machines lock for "kubernetes-upgrade-714000" in 363.083µs
	I0731 04:13:46.385149    7306 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-714000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:13:46.385432    7306 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:13:46.394867    7306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:13:46.441455    7306 start.go:159] libmachine.API.Create for "kubernetes-upgrade-714000" (driver="qemu2")
	I0731 04:13:46.441487    7306 client.go:168] LocalClient.Create starting
	I0731 04:13:46.441646    7306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:13:46.441687    7306 main.go:141] libmachine: Decoding PEM data...
	I0731 04:13:46.441707    7306 main.go:141] libmachine: Parsing certificate...
	I0731 04:13:46.441797    7306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:13:46.441827    7306 main.go:141] libmachine: Decoding PEM data...
	I0731 04:13:46.441842    7306 main.go:141] libmachine: Parsing certificate...
	I0731 04:13:46.442371    7306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:13:46.573565    7306 main.go:141] libmachine: Creating SSH key...
	I0731 04:13:46.685914    7306 main.go:141] libmachine: Creating Disk image...
	I0731 04:13:46.685922    7306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:13:46.686065    7306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:46.694489    7306 main.go:141] libmachine: STDOUT: 
	I0731 04:13:46.694503    7306 main.go:141] libmachine: STDERR: 
	I0731 04:13:46.694555    7306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2 +20000M
	I0731 04:13:46.701757    7306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:13:46.701769    7306 main.go:141] libmachine: STDERR: 
	I0731 04:13:46.701781    7306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:46.701788    7306 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:13:46.701829    7306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c9:4b:0c:11:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:46.703382    7306 main.go:141] libmachine: STDOUT: 
	I0731 04:13:46.703395    7306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:13:46.703408    7306 client.go:171] LocalClient.Create took 261.922583ms
	I0731 04:13:48.705519    7306 start.go:128] duration metric: createHost completed in 2.3201155s
	I0731 04:13:48.705587    7306 start.go:83] releasing machines lock for "kubernetes-upgrade-714000", held for 2.320551917s
	W0731 04:13:48.705941    7306 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:13:48.715353    7306 out.go:177] 
	W0731 04:13:48.719599    7306 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:13:48.719655    7306 out.go:239] * 
	* 
	W0731 04:13:48.722481    7306 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:13:48.730492    7306 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-714000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-714000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-714000 status --format={{.Host}}: exit status 7 (35.245583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184016416s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-714000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-714000 in cluster kubernetes-upgrade-714000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:13:48.906759    7324 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:13:48.906858    7324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:13:48.906861    7324 out.go:309] Setting ErrFile to fd 2...
	I0731 04:13:48.906864    7324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:13:48.906975    7324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:13:48.907921    7324 out.go:303] Setting JSON to false
	I0731 04:13:48.922961    7324 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9799,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:13:48.923039    7324 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:13:48.928046    7324 out.go:177] * [kubernetes-upgrade-714000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:13:48.938935    7324 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:13:48.934988    7324 notify.go:220] Checking for updates...
	I0731 04:13:48.946025    7324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:13:48.949907    7324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:13:48.953950    7324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:13:48.957034    7324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:13:48.958330    7324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:13:48.961297    7324 config.go:182] Loaded profile config "kubernetes-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0731 04:13:48.961535    7324 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:13:48.965954    7324 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:13:48.970966    7324 start.go:298] selected driver: qemu2
	I0731 04:13:48.970983    7324 start.go:898] validating driver "qemu2" against &{Name:kubernetes-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-714000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:13:48.971049    7324 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:13:48.972993    7324 cni.go:84] Creating CNI manager for ""
	I0731 04:13:48.973008    7324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:13:48.973014    7324 start_flags.go:319] config:
	{Name:kubernetes-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-714000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:13:48.977073    7324 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:13:48.985012    7324 out.go:177] * Starting control plane node kubernetes-upgrade-714000 in cluster kubernetes-upgrade-714000
	I0731 04:13:48.988940    7324 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:13:48.988967    7324 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:13:48.988978    7324 cache.go:57] Caching tarball of preloaded images
	I0731 04:13:48.989042    7324 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:13:48.989048    7324 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:13:48.989115    7324 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kubernetes-upgrade-714000/config.json ...
	I0731 04:13:48.989484    7324 start.go:365] acquiring machines lock for kubernetes-upgrade-714000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:13:48.989514    7324 start.go:369] acquired machines lock for "kubernetes-upgrade-714000" in 23µs
	I0731 04:13:48.989524    7324 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:13:48.989529    7324 fix.go:54] fixHost starting: 
	I0731 04:13:48.989648    7324 fix.go:102] recreateIfNeeded on kubernetes-upgrade-714000: state=Stopped err=<nil>
	W0731 04:13:48.989657    7324 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:13:48.996964    7324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-714000" ...
	I0731 04:13:49.001017    7324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c9:4b:0c:11:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:49.002910    7324 main.go:141] libmachine: STDOUT: 
	I0731 04:13:49.002932    7324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:13:49.002962    7324 fix.go:56] fixHost completed within 13.434333ms
	I0731 04:13:49.002969    7324 start.go:83] releasing machines lock for "kubernetes-upgrade-714000", held for 13.450458ms
	W0731 04:13:49.002979    7324 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:13:49.003014    7324 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:13:49.003018    7324 start.go:687] Will try again in 5 seconds ...
	I0731 04:13:54.005008    7324 start.go:365] acquiring machines lock for kubernetes-upgrade-714000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:13:54.005386    7324 start.go:369] acquired machines lock for "kubernetes-upgrade-714000" in 307.666µs
	I0731 04:13:54.005540    7324 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:13:54.005565    7324 fix.go:54] fixHost starting: 
	I0731 04:13:54.006286    7324 fix.go:102] recreateIfNeeded on kubernetes-upgrade-714000: state=Stopped err=<nil>
	W0731 04:13:54.006318    7324 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:13:54.010657    7324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-714000" ...
	I0731 04:13:54.017744    7324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c9:4b:0c:11:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubernetes-upgrade-714000/disk.qcow2
	I0731 04:13:54.026954    7324 main.go:141] libmachine: STDOUT: 
	I0731 04:13:54.027024    7324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:13:54.027108    7324 fix.go:56] fixHost completed within 21.546291ms
	I0731 04:13:54.027128    7324 start.go:83] releasing machines lock for "kubernetes-upgrade-714000", held for 21.719708ms
	W0731 04:13:54.027308    7324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:13:54.035679    7324 out.go:177] 
	W0731 04:13:54.039753    7324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:13:54.039777    7324 out.go:239] * 
	* 
	W0731 04:13:54.044180    7324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:13:54.050691    7324 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-714000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-714000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-714000 version --output=json: exit status 1 (61.822042ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-714000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-07-31 04:13:54.126912 -0700 PDT m=+1217.458827543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-714000 -n kubernetes-upgrade-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-714000 -n kubernetes-upgrade-714000: exit status 7 (33.048167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-714000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-714000
--- FAIL: TestKubernetesUpgrade (15.32s)
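
Every qemu2 failure in this report dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and each start exits with GUEST_PROVISION. A minimal pre-flight probe for that socket is sketched below in Go; the file name and the 2-second timeout are illustrative choices, not part of the test suite, and the socket path is the SocketVMnetPath value from the config dumps above.

	// socket_vmnet_probe.go: standalone sketch, not part of minikube.
	// Dials the unix socket that socket_vmnet_client needs; a dead daemon
	// surfaces here the same way it does in the log: "connection refused".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refusal from this probe before the suite runs would indicate the socket_vmnet daemon is down on the agent, which would account for this failure and the identical ones below without implicating minikube itself.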

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.1 on darwin (arm64)
- MINIKUBE_LOCATION=16968
- KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3495504790/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.1 on darwin (arm64)
- MINIKUBE_LOCATION=16968
- KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2976901343/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)
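
Both hyperkit subtests exit with code 56 (DRV_UNSUPPORTED_OS) because hyperkit is an Intel-only macOS hypervisor and this agent is darwin/arm64; the runs stop before any upgrade logic executes. The guard below is a hypothetical reconstruction of that platform check, for illustration only, not minikube's actual code.

	// hyperkit_guard.go: hypothetical sketch of the platform check these
	// subtests hit; hyperkit binaries exist only for darwin/amd64.
	package main

	import (
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
			os.Exit(56) // the exit status the test observed
		}
		fmt.Println("hyperkit is usable on this host")
	}

On an arm64 agent this failure is environmental rather than a regression in the code path being tested.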

                                                
                                    
TestStoppedBinaryUpgrade/Setup (166.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (166.03s)
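
This setup step tries to install the v1.6.2 minikube release and the server answered 404. The sketch below reproduces the check with a HEAD request; the URL is an assumption following GitHub's standard release-asset layout (the test's actual download path is not shown in the log), and v1.6.2 predates minikube's darwin-arm64 builds, so a missing asset for this host would explain the 404.

	// release_probe.go: sketch only; the asset URL below is assumed, not
	// taken from the test. v1.6.2 shipped no darwin-arm64 binary.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("response code:", resp.StatusCode) // the log shows this surfacing as "bad response code: 404"
	}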

                                                
                                    
TestPause/serial/Start (9.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-965000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-965000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.790718708s)

                                                
                                                
-- stdout --
	* [pause-965000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-965000 in cluster pause-965000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-965000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-965000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-965000 -n pause-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-965000 -n pause-965000: exit status 7 (69.664833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-965000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 : exit status 80 (9.813761292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-578000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-578000 in cluster NoKubernetes-578000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000: exit status 7 (69.456792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 : exit status 80 (5.40944575s)

                                                
                                                
-- stdout --
	* [NoKubernetes-578000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-578000
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000: exit status 7 (68.836083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.48s)

                                                
                                    
TestNoKubernetes/serial/Start (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 : exit status 80 (5.398748459s)

                                                
                                                
-- stdout --
	* [NoKubernetes-578000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-578000
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000: exit status 7 (69.34575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 : exit status 80 (5.399286125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-578000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-578000
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-578000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-578000 -n NoKubernetes-578000: exit status 7 (70.429667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0731 04:15:08.187721    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.782890459s)

                                                
                                                
-- stdout --
	* [auto-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-525000 in cluster auto-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:15:07.945849    7448 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:15:07.945986    7448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:07.945989    7448 out.go:309] Setting ErrFile to fd 2...
	I0731 04:15:07.945991    7448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:07.946101    7448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:15:07.947142    7448 out.go:303] Setting JSON to false
	I0731 04:15:07.962148    7448 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9878,"bootTime":1690792229,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:15:07.962211    7448 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:15:07.965951    7448 out.go:177] * [auto-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:15:07.972964    7448 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:15:07.973023    7448 notify.go:220] Checking for updates...
	I0731 04:15:07.976930    7448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:15:07.979973    7448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:15:07.982969    7448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:15:07.985952    7448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:15:07.988951    7448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:15:07.992284    7448 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:15:07.992324    7448 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:15:07.996901    7448 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:15:08.003979    7448 start.go:298] selected driver: qemu2
	I0731 04:15:08.003984    7448 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:15:08.003990    7448 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:15:08.005829    7448 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:15:08.008860    7448 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:15:08.011976    7448 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:15:08.011994    7448 cni.go:84] Creating CNI manager for ""
	I0731 04:15:08.012000    7448 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:15:08.012005    7448 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:15:08.012011    7448 start_flags.go:319] config:
	{Name:auto-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:15:08.016118    7448 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:15:08.022951    7448 out.go:177] * Starting control plane node auto-525000 in cluster auto-525000
	I0731 04:15:08.026919    7448 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:15:08.026944    7448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:15:08.026954    7448 cache.go:57] Caching tarball of preloaded images
	I0731 04:15:08.027022    7448 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:15:08.027028    7448 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:15:08.027105    7448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/auto-525000/config.json ...
	I0731 04:15:08.027117    7448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/auto-525000/config.json: {Name:mk9073ebb170da5c98bdd1204e834ef9f57f4ea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:15:08.027351    7448 start.go:365] acquiring machines lock for auto-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:08.027383    7448 start.go:369] acquired machines lock for "auto-525000" in 25.292µs
	I0731 04:15:08.027393    7448 start.go:93] Provisioning new machine with config: &{Name:auto-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:08.027425    7448 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:08.034951    7448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:08.051541    7448 start.go:159] libmachine.API.Create for "auto-525000" (driver="qemu2")
	I0731 04:15:08.051570    7448 client.go:168] LocalClient.Create starting
	I0731 04:15:08.051640    7448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:08.051668    7448 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:08.051677    7448 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:08.051731    7448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:08.051746    7448 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:08.051755    7448 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:08.052087    7448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:08.171838    7448 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:08.314656    7448 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:08.314664    7448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:08.314824    7448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:08.323397    7448 main.go:141] libmachine: STDOUT: 
	I0731 04:15:08.323412    7448 main.go:141] libmachine: STDERR: 
	I0731 04:15:08.323465    7448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2 +20000M
	I0731 04:15:08.330645    7448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:08.330657    7448 main.go:141] libmachine: STDERR: 
	I0731 04:15:08.330673    7448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:08.330679    7448 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:08.330720    7448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:94:6e:1e:79:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:08.332285    7448 main.go:141] libmachine: STDOUT: 
	I0731 04:15:08.332298    7448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:08.332314    7448 client.go:171] LocalClient.Create took 280.746083ms
	I0731 04:15:10.334428    7448 start.go:128] duration metric: createHost completed in 2.307035375s
	I0731 04:15:10.334490    7448 start.go:83] releasing machines lock for "auto-525000", held for 2.307150208s
	W0731 04:15:10.334560    7448 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:10.345598    7448 out.go:177] * Deleting "auto-525000" in qemu2 ...
	W0731 04:15:10.367904    7448 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:10.367934    7448 start.go:687] Will try again in 5 seconds ...
	I0731 04:15:15.370132    7448 start.go:365] acquiring machines lock for auto-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:15.370690    7448 start.go:369] acquired machines lock for "auto-525000" in 423.083µs
	I0731 04:15:15.370827    7448 start.go:93] Provisioning new machine with config: &{Name:auto-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:15.371079    7448 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:15.380741    7448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:15.421374    7448 start.go:159] libmachine.API.Create for "auto-525000" (driver="qemu2")
	I0731 04:15:15.421425    7448 client.go:168] LocalClient.Create starting
	I0731 04:15:15.421619    7448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:15.421682    7448 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:15.421699    7448 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:15.421792    7448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:15.421822    7448 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:15.421836    7448 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:15.422386    7448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:15.552106    7448 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:15.642042    7448 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:15.642048    7448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:15.642181    7448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:15.650860    7448 main.go:141] libmachine: STDOUT: 
	I0731 04:15:15.650877    7448 main.go:141] libmachine: STDERR: 
	I0731 04:15:15.650950    7448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2 +20000M
	I0731 04:15:15.658113    7448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:15.658131    7448 main.go:141] libmachine: STDERR: 
	I0731 04:15:15.658146    7448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:15.658152    7448 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:15.658193    7448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:2e:66:aa:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/auto-525000/disk.qcow2
	I0731 04:15:15.659754    7448 main.go:141] libmachine: STDOUT: 
	I0731 04:15:15.659768    7448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:15.659780    7448 client.go:171] LocalClient.Create took 238.356125ms
	I0731 04:15:17.661891    7448 start.go:128] duration metric: createHost completed in 2.2908295s
	I0731 04:15:17.661950    7448 start.go:83] releasing machines lock for "auto-525000", held for 2.291287708s
	W0731 04:15:17.662313    7448 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:17.671970    7448 out.go:177] 
	W0731 04:15:17.675868    7448 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:15:17.675928    7448 out.go:239] * 
	* 
	W0731 04:15:17.678575    7448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:15:17.686944    7448 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
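Every start in this network-plugins group fails the same way: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so host creation aborts with GUEST_PROVISION after a single retry. The failure can be confirmed outside minikube; the sketch below assumes socket_vmnet_client accepts an arbitrary command to wrap (here the no-op `true`), since it must connect to the socket before exec'ing anything:

	# Is the unix socket present at the path the driver uses?
	ls -l /var/run/socket_vmnet
	# Minimal reproduction: with the daemon down, the wrapper fails with the
	# same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true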

TestNetworkPlugins/group/kindnet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.731588916s)

-- stdout --
	* [kindnet-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-525000 in cluster kindnet-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:15:19.801273    7558 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:15:19.801398    7558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:19.801401    7558 out.go:309] Setting ErrFile to fd 2...
	I0731 04:15:19.801403    7558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:19.801516    7558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:15:19.802531    7558 out.go:303] Setting JSON to false
	I0731 04:15:19.817710    7558 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9890,"bootTime":1690792229,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:15:19.817774    7558 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:15:19.822938    7558 out.go:177] * [kindnet-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:15:19.829836    7558 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:15:19.832914    7558 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:15:19.829893    7558 notify.go:220] Checking for updates...
	I0731 04:15:19.836849    7558 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:15:19.839848    7558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:15:19.842867    7558 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:15:19.845844    7558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:15:19.849166    7558 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:15:19.849211    7558 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:15:19.853870    7558 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:15:19.860801    7558 start.go:298] selected driver: qemu2
	I0731 04:15:19.860807    7558 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:15:19.860813    7558 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:15:19.862735    7558 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:15:19.866830    7558 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:15:19.869826    7558 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:15:19.869852    7558 cni.go:84] Creating CNI manager for "kindnet"
	I0731 04:15:19.869857    7558 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 04:15:19.869864    7558 start_flags.go:319] config:
	{Name:kindnet-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:15:19.873990    7558 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:15:19.881883    7558 out.go:177] * Starting control plane node kindnet-525000 in cluster kindnet-525000
	I0731 04:15:19.885787    7558 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:15:19.885814    7558 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:15:19.885834    7558 cache.go:57] Caching tarball of preloaded images
	I0731 04:15:19.885904    7558 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:15:19.885910    7558 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:15:19.885991    7558 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kindnet-525000/config.json ...
	I0731 04:15:19.886009    7558 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kindnet-525000/config.json: {Name:mkf749499a27d784a002a669ed473dad4cc7926d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:15:19.886225    7558 start.go:365] acquiring machines lock for kindnet-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:19.886258    7558 start.go:369] acquired machines lock for "kindnet-525000" in 26.667µs
	I0731 04:15:19.886269    7558 start.go:93] Provisioning new machine with config: &{Name:kindnet-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:19.886302    7558 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:19.893816    7558 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:19.910449    7558 start.go:159] libmachine.API.Create for "kindnet-525000" (driver="qemu2")
	I0731 04:15:19.910482    7558 client.go:168] LocalClient.Create starting
	I0731 04:15:19.910565    7558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:19.910585    7558 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:19.910594    7558 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:19.910644    7558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:19.910658    7558 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:19.910667    7558 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:19.910978    7558 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:20.030619    7558 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:20.079169    7558 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:20.079176    7558 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:20.079320    7558 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:20.087857    7558 main.go:141] libmachine: STDOUT: 
	I0731 04:15:20.087871    7558 main.go:141] libmachine: STDERR: 
	I0731 04:15:20.087915    7558 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2 +20000M
	I0731 04:15:20.095288    7558 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:20.095311    7558 main.go:141] libmachine: STDERR: 
	I0731 04:15:20.095331    7558 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:20.095343    7558 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:20.095399    7558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:00:44:e6:95:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:20.096949    7558 main.go:141] libmachine: STDOUT: 
	I0731 04:15:20.096967    7558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:20.096984    7558 client.go:171] LocalClient.Create took 186.501709ms
	I0731 04:15:22.099115    7558 start.go:128] duration metric: createHost completed in 2.212843416s
	I0731 04:15:22.099193    7558 start.go:83] releasing machines lock for "kindnet-525000", held for 2.21297475s
	W0731 04:15:22.099304    7558 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:22.107562    7558 out.go:177] * Deleting "kindnet-525000" in qemu2 ...
	W0731 04:15:22.127747    7558 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:22.127820    7558 start.go:687] Will try again in 5 seconds ...
	I0731 04:15:27.129967    7558 start.go:365] acquiring machines lock for kindnet-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:27.130521    7558 start.go:369] acquired machines lock for "kindnet-525000" in 449.333µs
	I0731 04:15:27.130645    7558 start.go:93] Provisioning new machine with config: &{Name:kindnet-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:27.130965    7558 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:27.137791    7558 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:27.187490    7558 start.go:159] libmachine.API.Create for "kindnet-525000" (driver="qemu2")
	I0731 04:15:27.187538    7558 client.go:168] LocalClient.Create starting
	I0731 04:15:27.187647    7558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:27.187698    7558 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:27.187722    7558 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:27.187798    7558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:27.187830    7558 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:27.187841    7558 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:27.188371    7558 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:27.320353    7558 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:27.447282    7558 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:27.447287    7558 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:27.447426    7558 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:27.455901    7558 main.go:141] libmachine: STDOUT: 
	I0731 04:15:27.455919    7558 main.go:141] libmachine: STDERR: 
	I0731 04:15:27.455969    7558 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2 +20000M
	I0731 04:15:27.463087    7558 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:27.463100    7558 main.go:141] libmachine: STDERR: 
	I0731 04:15:27.463112    7558 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:27.463129    7558 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:27.463165    7558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ae:d1:45:f4:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kindnet-525000/disk.qcow2
	I0731 04:15:27.464651    7558 main.go:141] libmachine: STDOUT: 
	I0731 04:15:27.464664    7558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:27.464675    7558 client.go:171] LocalClient.Create took 277.136875ms
	I0731 04:15:29.466788    7558 start.go:128] duration metric: createHost completed in 2.335839625s
	I0731 04:15:29.466855    7558 start.go:83] releasing machines lock for "kindnet-525000", held for 2.336355417s
	W0731 04:15:29.467252    7558 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:29.475886    7558 out.go:177] 
	W0731 04:15:29.479848    7558 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:15:29.479903    7558 out.go:239] * 
	* 
	W0731 04:15:29.482530    7558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:15:29.491835    7558 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.73s)
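The kindnet transcript is byte-for-byte the same failure as auto/Start, which points at the runner environment rather than any CNI-specific code path. On a Homebrew-managed setup like the one the minikube qemu2 driver documents, restarting the daemon (it must run as root to create the vmnet interface) should clear the whole group; treat the exact commands below as an assumption about this runner's install, not something the log confirms:

	# socket_vmnet needs root privileges, so the service is started via sudo;
	# $(which brew) works around sudo's restricted PATH.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet
	# Verify the socket is back before re-running the suite.
	ls -l /var/run/socket_vmnet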

TestNetworkPlugins/group/calico/Start (9.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0731 04:15:35.904595    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/ingress-addon-legacy-464000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.732843208s)

-- stdout --
	* [calico-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-525000 in cluster calico-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:15:31.710466    7672 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:15:31.710599    7672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:31.710604    7672 out.go:309] Setting ErrFile to fd 2...
	I0731 04:15:31.710606    7672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:31.710741    7672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:15:31.711844    7672 out.go:303] Setting JSON to false
	I0731 04:15:31.727254    7672 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9902,"bootTime":1690792229,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:15:31.727338    7672 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:15:31.732048    7672 out.go:177] * [calico-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:15:31.739025    7672 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:15:31.739090    7672 notify.go:220] Checking for updates...
	I0731 04:15:31.745994    7672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:15:31.749041    7672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:15:31.752008    7672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:15:31.754999    7672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:15:31.757927    7672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:15:31.761323    7672 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:15:31.761366    7672 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:15:31.765941    7672 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:15:31.773026    7672 start.go:298] selected driver: qemu2
	I0731 04:15:31.773031    7672 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:15:31.773038    7672 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:15:31.775001    7672 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:15:31.777988    7672 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:15:31.781065    7672 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:15:31.781083    7672 cni.go:84] Creating CNI manager for "calico"
	I0731 04:15:31.781087    7672 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 04:15:31.781093    7672 start_flags.go:319] config:
	{Name:calico-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:15:31.785209    7672 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:15:31.792805    7672 out.go:177] * Starting control plane node calico-525000 in cluster calico-525000
	I0731 04:15:31.796960    7672 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:15:31.796986    7672 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:15:31.796998    7672 cache.go:57] Caching tarball of preloaded images
	I0731 04:15:31.797092    7672 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:15:31.797114    7672 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:15:31.797193    7672 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/calico-525000/config.json ...
	I0731 04:15:31.797210    7672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/calico-525000/config.json: {Name:mk2fb657b3e6ee7dcc33c518a2e2240d7f42322d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:15:31.797437    7672 start.go:365] acquiring machines lock for calico-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:31.797474    7672 start.go:369] acquired machines lock for "calico-525000" in 30.5µs
	I0731 04:15:31.797489    7672 start.go:93] Provisioning new machine with config: &{Name:calico-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:31.797564    7672 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:31.804937    7672 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:31.821624    7672 start.go:159] libmachine.API.Create for "calico-525000" (driver="qemu2")
	I0731 04:15:31.821663    7672 client.go:168] LocalClient.Create starting
	I0731 04:15:31.821726    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:31.821748    7672 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:31.821757    7672 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:31.821813    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:31.821829    7672 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:31.821837    7672 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:31.822196    7672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:31.942113    7672 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:32.041928    7672 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:32.041934    7672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:32.042118    7672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:32.050534    7672 main.go:141] libmachine: STDOUT: 
	I0731 04:15:32.050549    7672 main.go:141] libmachine: STDERR: 
	I0731 04:15:32.050606    7672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2 +20000M
	I0731 04:15:32.057770    7672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:32.057793    7672 main.go:141] libmachine: STDERR: 
	I0731 04:15:32.057803    7672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:32.057813    7672 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:32.057842    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:8a:f3:c4:cf:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:32.059311    7672 main.go:141] libmachine: STDOUT: 
	I0731 04:15:32.059324    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:32.059342    7672 client.go:171] LocalClient.Create took 237.678083ms
	I0731 04:15:34.061466    7672 start.go:128] duration metric: createHost completed in 2.263936166s
	I0731 04:15:34.061559    7672 start.go:83] releasing machines lock for "calico-525000", held for 2.264100208s
	W0731 04:15:34.061623    7672 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:34.072653    7672 out.go:177] * Deleting "calico-525000" in qemu2 ...
	W0731 04:15:34.093714    7672 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:34.093758    7672 start.go:687] Will try again in 5 seconds ...
	I0731 04:15:39.095837    7672 start.go:365] acquiring machines lock for calico-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:39.096289    7672 start.go:369] acquired machines lock for "calico-525000" in 353.542µs
	I0731 04:15:39.096402    7672 start.go:93] Provisioning new machine with config: &{Name:calico-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:39.097798    7672 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:39.100949    7672 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:39.143287    7672 start.go:159] libmachine.API.Create for "calico-525000" (driver="qemu2")
	I0731 04:15:39.143329    7672 client.go:168] LocalClient.Create starting
	I0731 04:15:39.143461    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:39.143509    7672 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:39.143534    7672 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:39.143623    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:39.143650    7672 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:39.143670    7672 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:39.144138    7672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:39.273513    7672 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:39.356301    7672 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:39.356307    7672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:39.356450    7672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:39.365187    7672 main.go:141] libmachine: STDOUT: 
	I0731 04:15:39.365199    7672 main.go:141] libmachine: STDERR: 
	I0731 04:15:39.365270    7672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2 +20000M
	I0731 04:15:39.372375    7672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:39.372386    7672 main.go:141] libmachine: STDERR: 
	I0731 04:15:39.372401    7672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:39.372410    7672 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:39.372448    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b3:2e:19:3b:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/calico-525000/disk.qcow2
	I0731 04:15:39.373997    7672 main.go:141] libmachine: STDOUT: 
	I0731 04:15:39.374015    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:39.374027    7672 client.go:171] LocalClient.Create took 230.695667ms
	I0731 04:15:41.376169    7672 start.go:128] duration metric: createHost completed in 2.278394084s
	I0731 04:15:41.376223    7672 start.go:83] releasing machines lock for "calico-525000", held for 2.279961875s
	W0731 04:15:41.376671    7672 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:41.386299    7672 out.go:177] 
	W0731 04:15:41.390357    7672 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:15:41.390389    7672 out.go:239] * 
	* 
	W0731 04:15:41.392912    7672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:15:41.402263    7672 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.73s)
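
Every failure in this group dies at the same point: socket_vmnet_client gets "Connection refused" when it dials the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and createHost times out. A quick way to confirm the daemon's state independently of minikube is to dial the socket directly. The standalone Go probe below is a hypothetical diagnostic sketch, not part of the test suite; the file name and timeout are our choices:

	// probe_socket_vmnet.go — hypothetical diagnostic, not part of minikube.
	// Dials the unix socket that socket_vmnet_client needs; a "connection
	// refused" here reproduces the failure recorded in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "FAIL: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("OK: %s is accepting connections\n", sock)
	}

If the probe fails the way these tests do, the socket_vmnet daemon on the Jenkins host is down or its socket path is stale, which would account for every "exit status 80" in this group.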

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.851587042s)

-- stdout --
	* [custom-flannel-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-525000 in cluster custom-flannel-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:15:43.754058    7790 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:15:43.754192    7790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:43.754197    7790 out.go:309] Setting ErrFile to fd 2...
	I0731 04:15:43.754200    7790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:43.754308    7790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:15:43.755323    7790 out.go:303] Setting JSON to false
	I0731 04:15:43.770694    7790 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9914,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:15:43.770774    7790 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:15:43.776263    7790 out.go:177] * [custom-flannel-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:15:43.783310    7790 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:15:43.787209    7790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:15:43.783356    7790 notify.go:220] Checking for updates...
	I0731 04:15:43.794270    7790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:15:43.798241    7790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:15:43.801290    7790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:15:43.804350    7790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:15:43.807670    7790 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:15:43.807712    7790 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:15:43.815334    7790 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:15:43.820245    7790 start.go:298] selected driver: qemu2
	I0731 04:15:43.820249    7790 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:15:43.820258    7790 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:15:43.822183    7790 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:15:43.825258    7790 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:15:43.828379    7790 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:15:43.828405    7790 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 04:15:43.828430    7790 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 04:15:43.828437    7790 start_flags.go:319] config:
	{Name:custom-flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAu
thSock: SSHAgentPID:0}
	I0731 04:15:43.832682    7790 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:15:43.841294    7790 out.go:177] * Starting control plane node custom-flannel-525000 in cluster custom-flannel-525000
	I0731 04:15:43.845254    7790 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:15:43.845288    7790 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:15:43.845301    7790 cache.go:57] Caching tarball of preloaded images
	I0731 04:15:43.845356    7790 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:15:43.845361    7790 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:15:43.845428    7790 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/custom-flannel-525000/config.json ...
	I0731 04:15:43.845441    7790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/custom-flannel-525000/config.json: {Name:mkc813212cf21f77a53bf50c9948ab53cea788d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:15:43.845659    7790 start.go:365] acquiring machines lock for custom-flannel-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:43.845692    7790 start.go:369] acquired machines lock for "custom-flannel-525000" in 25.916µs
	I0731 04:15:43.845705    7790 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custo
m-flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:43.845753    7790 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:43.850306    7790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:43.867272    7790 start.go:159] libmachine.API.Create for "custom-flannel-525000" (driver="qemu2")
	I0731 04:15:43.867302    7790 client.go:168] LocalClient.Create starting
	I0731 04:15:43.867359    7790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:43.867381    7790 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:43.867397    7790 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:43.867446    7790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:43.867461    7790 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:43.867469    7790 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:43.867797    7790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:43.986117    7790 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:44.099821    7790 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:44.099829    7790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:44.099973    7790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:44.108731    7790 main.go:141] libmachine: STDOUT: 
	I0731 04:15:44.108744    7790 main.go:141] libmachine: STDERR: 
	I0731 04:15:44.108796    7790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2 +20000M
	I0731 04:15:44.115896    7790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:44.115910    7790 main.go:141] libmachine: STDERR: 
	I0731 04:15:44.115930    7790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:44.115938    7790 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:44.115982    7790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:03:bf:16:de:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:44.117562    7790 main.go:141] libmachine: STDOUT: 
	I0731 04:15:44.117576    7790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:44.117593    7790 client.go:171] LocalClient.Create took 250.29175ms
	I0731 04:15:46.119740    7790 start.go:128] duration metric: createHost completed in 2.274008917s
	I0731 04:15:46.119842    7790 start.go:83] releasing machines lock for "custom-flannel-525000", held for 2.274191375s
	W0731 04:15:46.119954    7790 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:46.127573    7790 out.go:177] * Deleting "custom-flannel-525000" in qemu2 ...
	W0731 04:15:46.150375    7790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:46.150401    7790 start.go:687] Will try again in 5 seconds ...
	I0731 04:15:51.152545    7790 start.go:365] acquiring machines lock for custom-flannel-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:51.152967    7790 start.go:369] acquired machines lock for "custom-flannel-525000" in 327.791µs
	I0731 04:15:51.153080    7790 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custo
m-flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:51.153384    7790 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:51.163024    7790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:51.210495    7790 start.go:159] libmachine.API.Create for "custom-flannel-525000" (driver="qemu2")
	I0731 04:15:51.210544    7790 client.go:168] LocalClient.Create starting
	I0731 04:15:51.210701    7790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:51.210746    7790 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:51.210774    7790 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:51.210842    7790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:51.210870    7790 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:51.210885    7790 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:51.211372    7790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:51.342327    7790 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:51.518790    7790 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:51.518801    7790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:51.518963    7790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:51.527601    7790 main.go:141] libmachine: STDOUT: 
	I0731 04:15:51.527627    7790 main.go:141] libmachine: STDERR: 
	I0731 04:15:51.527695    7790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2 +20000M
	I0731 04:15:51.534982    7790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:51.534994    7790 main.go:141] libmachine: STDERR: 
	I0731 04:15:51.535013    7790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:51.535023    7790 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:51.535069    7790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:bd:33:20:12:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/custom-flannel-525000/disk.qcow2
	I0731 04:15:51.536627    7790 main.go:141] libmachine: STDOUT: 
	I0731 04:15:51.536639    7790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:51.536653    7790 client.go:171] LocalClient.Create took 326.112042ms
	I0731 04:15:53.538788    7790 start.go:128] duration metric: createHost completed in 2.385427334s
	I0731 04:15:53.538892    7790 start.go:83] releasing machines lock for "custom-flannel-525000", held for 2.385955958s
	W0731 04:15:53.539433    7790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:53.548140    7790 out.go:177] 
	W0731 04:15:53.553133    7790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:15:53.553179    7790 out.go:239] * 
	* 
	W0731 04:15:53.555536    7790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:15:53.564067    7790 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
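
Before the VM launch fails, the disk-image preparation in this log succeeds every time: libmachine converts the raw seed image to qcow2 and then grows it by 20000M, exactly the two qemu-img invocations shown above. As a minimal sketch of those two steps (the file names are placeholders and qemu-img is assumed to be on PATH; this is not libmachine's actual code):

	// make_disk.go — illustrative sketch of the two qemu-img steps the log
	// shows succeeding; file names are placeholders, qemu-img must be on PATH.
	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts with its combined output on failure,
	// roughly how the empty STDOUT/STDERR pairs above get captured.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}

This matters for triage: both commands exit cleanly (empty STDERR in the log), so the failure is isolated to the socket_vmnet networking step, not to QEMU or the disk tooling.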

TestNetworkPlugins/group/false/Start (9.65s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.650359667s)

-- stdout --
	* [false-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-525000 in cluster false-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:15:55.893072    7910 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:15:55.893201    7910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:55.893204    7910 out.go:309] Setting ErrFile to fd 2...
	I0731 04:15:55.893207    7910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:15:55.893329    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:15:55.894420    7910 out.go:303] Setting JSON to false
	I0731 04:15:55.909635    7910 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9926,"bootTime":1690792229,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:15:55.909709    7910 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:15:55.914850    7910 out.go:177] * [false-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:15:55.921934    7910 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:15:55.921979    7910 notify.go:220] Checking for updates...
	I0731 04:15:55.925810    7910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:15:55.929806    7910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:15:55.932738    7910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:15:55.936784    7910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:15:55.939855    7910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:15:55.943105    7910 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:15:55.943151    7910 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:15:55.947733    7910 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:15:55.953814    7910 start.go:298] selected driver: qemu2
	I0731 04:15:55.953824    7910 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:15:55.953834    7910 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:15:55.955905    7910 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:15:55.958812    7910 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:15:55.961943    7910 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:15:55.961964    7910 cni.go:84] Creating CNI manager for "false"
	I0731 04:15:55.961967    7910 start_flags.go:319] config:
	{Name:false-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: Fe
atureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:15:55.966143    7910 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:15:55.974815    7910 out.go:177] * Starting control plane node false-525000 in cluster false-525000
	I0731 04:15:55.978831    7910 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:15:55.978876    7910 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:15:55.978890    7910 cache.go:57] Caching tarball of preloaded images
	I0731 04:15:55.978961    7910 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:15:55.978966    7910 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:15:55.979046    7910 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/false-525000/config.json ...
	I0731 04:15:55.979059    7910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/false-525000/config.json: {Name:mkb28c914fc29b3542af12a87b48b5ae9f2499cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:15:55.979263    7910 start.go:365] acquiring machines lock for false-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:15:55.979293    7910 start.go:369] acquired machines lock for "false-525000" in 24.542µs
	I0731 04:15:55.979303    7910 start.go:93] Provisioning new machine with config: &{Name:false-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-525000 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:15:55.979336    7910 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:15:55.982852    7910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:15:55.998646    7910 start.go:159] libmachine.API.Create for "false-525000" (driver="qemu2")
	I0731 04:15:55.998668    7910 client.go:168] LocalClient.Create starting
	I0731 04:15:55.998723    7910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:15:55.998743    7910 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:55.998755    7910 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:55.998809    7910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:15:55.998824    7910 main.go:141] libmachine: Decoding PEM data...
	I0731 04:15:55.998833    7910 main.go:141] libmachine: Parsing certificate...
	I0731 04:15:55.999157    7910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:15:56.118465    7910 main.go:141] libmachine: Creating SSH key...
	I0731 04:15:56.151930    7910 main.go:141] libmachine: Creating Disk image...
	I0731 04:15:56.151935    7910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:15:56.152086    7910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:15:56.160559    7910 main.go:141] libmachine: STDOUT: 
	I0731 04:15:56.160570    7910 main.go:141] libmachine: STDERR: 
	I0731 04:15:56.160614    7910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2 +20000M
	I0731 04:15:56.167762    7910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:15:56.167773    7910 main.go:141] libmachine: STDERR: 
	I0731 04:15:56.167788    7910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:15:56.167797    7910 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:15:56.167826    7910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:75:e4:72:b9:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:15:56.169327    7910 main.go:141] libmachine: STDOUT: 
	I0731 04:15:56.169339    7910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:15:56.169357    7910 client.go:171] LocalClient.Create took 170.686708ms
	I0731 04:15:58.171566    7910 start.go:128] duration metric: createHost completed in 2.192239667s
	I0731 04:15:58.171634    7910 start.go:83] releasing machines lock for "false-525000", held for 2.19237975s
	W0731 04:15:58.171687    7910 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:58.179798    7910 out.go:177] * Deleting "false-525000" in qemu2 ...
	W0731 04:15:58.201925    7910 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:15:58.201951    7910 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:03.204092    7910 start.go:365] acquiring machines lock for false-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:03.204513    7910 start.go:369] acquired machines lock for "false-525000" in 307.333µs
	I0731 04:16:03.204627    7910 start.go:93] Provisioning new machine with config: &{Name:false-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-525000 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:03.204887    7910 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:03.207035    7910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:03.254686    7910 start.go:159] libmachine.API.Create for "false-525000" (driver="qemu2")
	I0731 04:16:03.254740    7910 client.go:168] LocalClient.Create starting
	I0731 04:16:03.254889    7910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:03.254937    7910 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:03.254959    7910 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:03.255039    7910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:03.255089    7910 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:03.255105    7910 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:03.255741    7910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:03.388695    7910 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:03.457060    7910 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:03.457066    7910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:03.457354    7910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:16:03.466006    7910 main.go:141] libmachine: STDOUT: 
	I0731 04:16:03.466019    7910 main.go:141] libmachine: STDERR: 
	I0731 04:16:03.466075    7910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2 +20000M
	I0731 04:16:03.473330    7910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:03.473346    7910 main.go:141] libmachine: STDERR: 
	I0731 04:16:03.473362    7910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:16:03.473379    7910 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:03.473423    7910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:06:f5:65:85:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/false-525000/disk.qcow2
	I0731 04:16:03.475034    7910 main.go:141] libmachine: STDOUT: 
	I0731 04:16:03.475048    7910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:03.475061    7910 client.go:171] LocalClient.Create took 220.320042ms
	I0731 04:16:05.477176    7910 start.go:128] duration metric: createHost completed in 2.272296833s
	I0731 04:16:05.477266    7910 start.go:83] releasing machines lock for "false-525000", held for 2.272752292s
	W0731 04:16:05.477700    7910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:05.487629    7910 out.go:177] 
	W0731 04:16:05.492717    7910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:05.492741    7910 out.go:239] * 
	* 
	W0731 04:16:05.495286    7910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:05.506580    7910 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.65s)
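
The control flow around the failure is identical in every block: the first createHost attempt fails, the half-created profile is deleted, minikube waits 5 seconds, retries once, and only then exits with GUEST_PROVISION / status 80. A minimal sketch of that observable start-delete-retry shape (ours, inferred from the log lines; not minikube's actual implementation):

	// retry_once.go — sketch of the start/delete/retry flow visible in the
	// logs; it mirrors observable behaviour only, not minikube's source.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for host creation; it fails the way the logs do,
	// so both attempts are exercised.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // the logs show a fixed 5s backoff
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}

Since both attempts hit the same refused socket, the retry adds roughly 5 seconds on top of two ~2.3s createHost timeouts, which is why each of these failures lands near the 10-second mark.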

TestNetworkPlugins/group/enable-default-cni/Start (9.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.738028916s)

-- stdout --
	* [enable-default-cni-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-525000 in cluster enable-default-cni-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:07.659157    8022 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:07.659268    8022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:07.659271    8022 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:07.659274    8022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:07.659373    8022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:07.660387    8022 out.go:303] Setting JSON to false
	I0731 04:16:07.675409    8022 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9938,"bootTime":1690792229,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:07.675463    8022 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:07.680446    8022 out.go:177] * [enable-default-cni-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:07.687661    8022 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:07.687674    8022 notify.go:220] Checking for updates...
	I0731 04:16:07.694561    8022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:07.697662    8022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:07.701583    8022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:07.704647    8022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:07.707611    8022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:07.710889    8022 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:07.710932    8022 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:07.715582    8022 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:07.721554    8022 start.go:298] selected driver: qemu2
	I0731 04:16:07.721565    8022 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:07.721572    8022 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:07.723396    8022 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:07.726634    8022 out.go:177] * Automatically selected the socket_vmnet network
	E0731 04:16:07.729680    8022 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 04:16:07.729692    8022 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:07.729716    8022 cni.go:84] Creating CNI manager for "bridge"
	I0731 04:16:07.729720    8022 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:16:07.729733    8022 start_flags.go:319] config:
	{Name:enable-default-cni-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:07.734006    8022 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:07.741603    8022 out.go:177] * Starting control plane node enable-default-cni-525000 in cluster enable-default-cni-525000
	I0731 04:16:07.745646    8022 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:07.745670    8022 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:16:07.745679    8022 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:07.745749    8022 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:07.745754    8022 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:16:07.745813    8022 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/enable-default-cni-525000/config.json ...
	I0731 04:16:07.745825    8022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/enable-default-cni-525000/config.json: {Name:mk458114c9ea6d5868938385cd02fab3f8241ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:07.746034    8022 start.go:365] acquiring machines lock for enable-default-cni-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:07.746066    8022 start.go:369] acquired machines lock for "enable-default-cni-525000" in 24.458µs
	I0731 04:16:07.746077    8022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:07.746110    8022 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:07.754429    8022 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:07.770561    8022 start.go:159] libmachine.API.Create for "enable-default-cni-525000" (driver="qemu2")
	I0731 04:16:07.770587    8022 client.go:168] LocalClient.Create starting
	I0731 04:16:07.770660    8022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:07.770685    8022 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:07.770697    8022 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:07.770744    8022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:07.770760    8022 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:07.770766    8022 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:07.771392    8022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:07.889131    8022 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:07.962759    8022 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:07.962765    8022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:07.962904    8022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:07.971443    8022 main.go:141] libmachine: STDOUT: 
	I0731 04:16:07.971455    8022 main.go:141] libmachine: STDERR: 
	I0731 04:16:07.971510    8022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2 +20000M
	I0731 04:16:07.978755    8022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:07.978766    8022 main.go:141] libmachine: STDERR: 
	I0731 04:16:07.978785    8022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:07.978794    8022 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:07.978830    8022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:6a:53:d2:ca:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:07.980324    8022 main.go:141] libmachine: STDOUT: 
	I0731 04:16:07.980334    8022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:07.980351    8022 client.go:171] LocalClient.Create took 209.764333ms
	I0731 04:16:09.982508    8022 start.go:128] duration metric: createHost completed in 2.236427583s
	I0731 04:16:09.982580    8022 start.go:83] releasing machines lock for "enable-default-cni-525000", held for 2.236555333s
	W0731 04:16:09.982688    8022 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:09.988968    8022 out.go:177] * Deleting "enable-default-cni-525000" in qemu2 ...
	W0731 04:16:10.015933    8022 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:10.015958    8022 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:15.018098    8022 start.go:365] acquiring machines lock for enable-default-cni-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:15.018616    8022 start.go:369] acquired machines lock for "enable-default-cni-525000" in 415µs
	I0731 04:16:15.018759    8022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:15.019122    8022 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:15.028931    8022 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:15.077965    8022 start.go:159] libmachine.API.Create for "enable-default-cni-525000" (driver="qemu2")
	I0731 04:16:15.078008    8022 client.go:168] LocalClient.Create starting
	I0731 04:16:15.078146    8022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:15.078195    8022 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:15.078214    8022 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:15.078300    8022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:15.078326    8022 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:15.078340    8022 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:15.078865    8022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:15.210594    8022 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:15.307703    8022 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:15.307708    8022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:15.307861    8022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:15.316729    8022 main.go:141] libmachine: STDOUT: 
	I0731 04:16:15.316743    8022 main.go:141] libmachine: STDERR: 
	I0731 04:16:15.316803    8022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2 +20000M
	I0731 04:16:15.323950    8022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:15.323975    8022 main.go:141] libmachine: STDERR: 
	I0731 04:16:15.323986    8022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:15.324001    8022 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:15.324039    8022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b1:d4:ee:91:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/enable-default-cni-525000/disk.qcow2
	I0731 04:16:15.325663    8022 main.go:141] libmachine: STDOUT: 
	I0731 04:16:15.325679    8022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:15.325689    8022 client.go:171] LocalClient.Create took 247.681167ms
	I0731 04:16:17.327810    8022 start.go:128] duration metric: createHost completed in 2.308709959s
	I0731 04:16:17.327874    8022 start.go:83] releasing machines lock for "enable-default-cni-525000", held for 2.309283875s
	W0731 04:16:17.328285    8022 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:17.338826    8022 out.go:177] 
	W0731 04:16:17.342831    8022 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:17.342882    8022 out.go:239] * 
	* 
	W0731 04:16:17.345309    8022 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:17.355787    8022 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.74s)
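
One detail worth noting in the stderr above: the E-level line shows that the deprecated --enable-default-cni flag is rewritten internally to --cni=bridge, which is why the generated config carries CNI:bridge and EnableDefaultCNI:false. A sketch of the equivalent invocation without the deprecated flag (same profile name and settings as the test command) would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-525000 \
	  --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
	  --cni=bridge --driver=qemu2

Either spelling fails identically here, since the run dies at VM creation, long before any CNI configuration is applied.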

TestNetworkPlugins/group/flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.730651583s)

-- stdout --
	* [flannel-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-525000 in cluster flannel-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:19.513727    8134 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:19.513856    8134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:19.513859    8134 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:19.513861    8134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:19.513976    8134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:19.515016    8134 out.go:303] Setting JSON to false
	I0731 04:16:19.530314    8134 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9950,"bootTime":1690792229,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:19.530378    8134 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:19.535643    8134 out.go:177] * [flannel-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:19.542621    8134 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:19.542652    8134 notify.go:220] Checking for updates...
	I0731 04:16:19.546558    8134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:19.550654    8134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:19.553568    8134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:19.556557    8134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:19.559571    8134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:19.562956    8134 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:19.563007    8134 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:19.567527    8134 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:19.574580    8134 start.go:298] selected driver: qemu2
	I0731 04:16:19.574584    8134 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:19.574598    8134 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:19.576506    8134 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:19.579561    8134 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:19.582592    8134 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:19.582609    8134 cni.go:84] Creating CNI manager for "flannel"
	I0731 04:16:19.582613    8134 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 04:16:19.582618    8134 start_flags.go:319] config:
	{Name:flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:19.586720    8134 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:19.593526    8134 out.go:177] * Starting control plane node flannel-525000 in cluster flannel-525000
	I0731 04:16:19.597550    8134 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:19.597582    8134 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:16:19.597603    8134 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:19.597681    8134 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:19.597687    8134 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:16:19.597756    8134 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/flannel-525000/config.json ...
	I0731 04:16:19.597768    8134 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/flannel-525000/config.json: {Name:mke7dc808eef962a62469305c095058187d34116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:19.597967    8134 start.go:365] acquiring machines lock for flannel-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:19.597998    8134 start.go:369] acquired machines lock for "flannel-525000" in 25.166µs
	I0731 04:16:19.598009    8134 start.go:93] Provisioning new machine with config: &{Name:flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:19.598039    8134 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:19.605649    8134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:19.622041    8134 start.go:159] libmachine.API.Create for "flannel-525000" (driver="qemu2")
	I0731 04:16:19.622068    8134 client.go:168] LocalClient.Create starting
	I0731 04:16:19.622126    8134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:19.622146    8134 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:19.622158    8134 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:19.622208    8134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:19.622223    8134 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:19.622230    8134 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:19.622565    8134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:19.740727    8134 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:19.785339    8134 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:19.785347    8134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:19.785492    8134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:19.793971    8134 main.go:141] libmachine: STDOUT: 
	I0731 04:16:19.793987    8134 main.go:141] libmachine: STDERR: 
	I0731 04:16:19.794042    8134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2 +20000M
	I0731 04:16:19.801279    8134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:19.801291    8134 main.go:141] libmachine: STDERR: 
	I0731 04:16:19.801304    8134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:19.801310    8134 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:19.801345    8134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6a:cc:75:50:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:19.802873    8134 main.go:141] libmachine: STDOUT: 
	I0731 04:16:19.802887    8134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:19.802906    8134 client.go:171] LocalClient.Create took 180.838083ms
	I0731 04:16:21.803915    8134 start.go:128] duration metric: createHost completed in 2.205912166s
	I0731 04:16:21.803978    8134 start.go:83] releasing machines lock for "flannel-525000", held for 2.206020667s
	W0731 04:16:21.804049    8134 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:21.811143    8134 out.go:177] * Deleting "flannel-525000" in qemu2 ...
	W0731 04:16:21.831983    8134 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:21.832010    8134 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:26.834177    8134 start.go:365] acquiring machines lock for flannel-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:26.834805    8134 start.go:369] acquired machines lock for "flannel-525000" in 487.208µs
	I0731 04:16:26.834905    8134 start.go:93] Provisioning new machine with config: &{Name:flannel-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:26.835159    8134 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:26.843950    8134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:26.889177    8134 start.go:159] libmachine.API.Create for "flannel-525000" (driver="qemu2")
	I0731 04:16:26.889227    8134 client.go:168] LocalClient.Create starting
	I0731 04:16:26.889370    8134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:26.889433    8134 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:26.889461    8134 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:26.889539    8134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:26.889567    8134 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:26.889581    8134 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:26.890128    8134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:27.021572    8134 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:27.157272    8134 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:27.157279    8134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:27.157502    8134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:27.166402    8134 main.go:141] libmachine: STDOUT: 
	I0731 04:16:27.166419    8134 main.go:141] libmachine: STDERR: 
	I0731 04:16:27.166483    8134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2 +20000M
	I0731 04:16:27.173625    8134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:27.173642    8134 main.go:141] libmachine: STDERR: 
	I0731 04:16:27.173662    8134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:27.173669    8134 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:27.173719    8134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a4:10:02:11:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/flannel-525000/disk.qcow2
	I0731 04:16:27.175258    8134 main.go:141] libmachine: STDOUT: 
	I0731 04:16:27.175273    8134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:27.175299    8134 client.go:171] LocalClient.Create took 286.06275ms
	I0731 04:16:29.177404    8134 start.go:128] duration metric: createHost completed in 2.342276s
	I0731 04:16:29.177471    8134 start.go:83] releasing machines lock for "flannel-525000", held for 2.342694292s
	W0731 04:16:29.177879    8134 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:29.187490    8134 out.go:177] 
	W0731 04:16:29.191495    8134 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:29.191550    8134 out.go:239] * 
	* 
	W0731 04:16:29.194294    8134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:29.203527    8134 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.73s)
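
The trace above also shows minikube's recovery path: the first StartHost failure triggers a profile delete and a 5-second backoff, a second machine is provisioned from scratch, and only when the retry hits the same "Connection refused" does the run exit with status 80 (GUEST_PROVISION). Since both attempts die at the socket connect, the failure can be reproduced without minikube; a hypothetical probe of the socket from the agent (nc -U speaks Unix-domain sockets on macOS):

	nc -U /var/run/socket_vmnet < /dev/null \
	  && echo "socket is accepting connections" \
	  || echo "connection refused, daemon is down"

A refused connection here would confirm that the environment, not the network plugin under test, is at fault, consistent with every plugin variant in this group failing with an identical trace.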

TestNetworkPlugins/group/bridge/Start (9.87s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.872142209s)

-- stdout --
	* [bridge-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-525000 in cluster bridge-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:31.545440    8252 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:31.545567    8252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:31.545570    8252 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:31.545573    8252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:31.545678    8252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:31.546704    8252 out.go:303] Setting JSON to false
	I0731 04:16:31.562139    8252 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9962,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:31.562227    8252 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:31.567181    8252 out.go:177] * [bridge-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:31.574165    8252 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:31.574214    8252 notify.go:220] Checking for updates...
	I0731 04:16:31.577319    8252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:31.581209    8252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:31.584213    8252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:31.587168    8252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:31.590213    8252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:31.593436    8252 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:31.593477    8252 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:31.597178    8252 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:31.604165    8252 start.go:298] selected driver: qemu2
	I0731 04:16:31.604170    8252 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:31.604177    8252 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:31.605975    8252 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:31.609135    8252 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:31.612289    8252 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:31.612314    8252 cni.go:84] Creating CNI manager for "bridge"
	I0731 04:16:31.612319    8252 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:16:31.612335    8252 start_flags.go:319] config:
	{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:31.616421    8252 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:31.623161    8252 out.go:177] * Starting control plane node bridge-525000 in cluster bridge-525000
	I0731 04:16:31.627018    8252 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:31.627041    8252 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:16:31.627051    8252 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:31.627118    8252 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:31.627123    8252 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:16:31.627189    8252 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/bridge-525000/config.json ...
	I0731 04:16:31.627201    8252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/bridge-525000/config.json: {Name:mk2653e531c1716846660c76162ce3ddcde99e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:31.627426    8252 start.go:365] acquiring machines lock for bridge-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:31.627454    8252 start.go:369] acquired machines lock for "bridge-525000" in 22.958µs
	I0731 04:16:31.627465    8252 start.go:93] Provisioning new machine with config: &{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:31.627491    8252 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:31.634107    8252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:31.649761    8252 start.go:159] libmachine.API.Create for "bridge-525000" (driver="qemu2")
	I0731 04:16:31.649786    8252 client.go:168] LocalClient.Create starting
	I0731 04:16:31.649846    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:31.649865    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:31.649873    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:31.649918    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:31.649932    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:31.649937    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:31.650220    8252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:31.769675    8252 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:31.962684    8252 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:31.962698    8252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:31.962891    8252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.971945    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:31.971974    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:31.972037    8252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2 +20000M
	I0731 04:16:31.979509    8252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:31.979521    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:31.979540    8252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.979548    8252 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:31.979592    8252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4c:49:3e:0d:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.981082    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:31.981096    8252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:31.981116    8252 client.go:171] LocalClient.Create took 331.333167ms
	I0731 04:16:33.983226    8252 start.go:128] duration metric: createHost completed in 2.355772166s
	I0731 04:16:33.983289    8252 start.go:83] releasing machines lock for "bridge-525000", held for 2.355878834s
	W0731 04:16:33.983400    8252 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:33.991951    8252 out.go:177] * Deleting "bridge-525000" in qemu2 ...
	W0731 04:16:34.014019    8252 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:34.014047    8252 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:39.016133    8252 start.go:365] acquiring machines lock for bridge-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:39.016657    8252 start.go:369] acquired machines lock for "bridge-525000" in 421.25µs
	I0731 04:16:39.016765    8252 start.go:93] Provisioning new machine with config: &{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:39.017017    8252 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:39.023591    8252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:39.070993    8252 start.go:159] libmachine.API.Create for "bridge-525000" (driver="qemu2")
	I0731 04:16:39.071037    8252 client.go:168] LocalClient.Create starting
	I0731 04:16:39.071191    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:39.071245    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:39.071264    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:39.071377    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:39.071411    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:39.071424    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:39.071938    8252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:39.205421    8252 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:39.328473    8252 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:39.328479    8252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:39.328642    8252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.336983    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:39.336995    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:39.337046    8252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2 +20000M
	I0731 04:16:39.344348    8252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:39.344361    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:39.344373    8252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.344381    8252 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:39.344425    8252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e2:23:64:c8:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.345909    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:39.345923    8252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:39.345936    8252 client.go:171] LocalClient.Create took 274.898208ms
	I0731 04:16:41.348045    8252 start.go:128] duration metric: createHost completed in 2.331060375s
	I0731 04:16:41.348146    8252 start.go:83] releasing machines lock for "bridge-525000", held for 2.331482333s
	W0731 04:16:41.348573    8252 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:41.360169    8252 out.go:177] 
	W0731 04:16:41.364177    8252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:41.364205    8252 out.go:239] * 
	W0731 04:16:41.366783    8252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:41.376116    8252 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.87s)
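Every failed qemu2 start in this run dies on the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, so when no socket_vmnet daemon is listening on /var/run/socket_vmnet the VM is never created and the run exits with status 80 (GUEST_PROVISION). A minimal Go sketch for probing the daemon from the CI host before suspecting the tests; the socket path is taken from the config dump above, and this helper is illustrative, not part of minikube:

```go
// socketcheck: dial the socket_vmnet control socket the same way
// socket_vmnet_client does, and report whether anything is listening.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the exact failure mode in the logs above:
		// "Connection refused" means no daemon is bound to the socket.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
```

If the dial fails the same way on the agent, restarting the socket_vmnet service there should clear this whole family of Start failures at once, since they all share the one root cause.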

TestStoppedBinaryUpgrade/Upgrade (1.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe: permission denied (6.297083ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe: permission denied (5.081875ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe start -p stopped-upgrade-844000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe: permission denied (1.139709ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.659059780.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1.35s)
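Here the legacy binary never runs at all: "fork/exec ...: permission denied" is what Go returns when the file being exec'd lacks the execute bit, which points at the cached v1.6.2 download under $TMPDIR having been written without executable permissions. The millisecond-scale failure times (6.3ms, 5.1ms, 1.1ms) support this: the process fails at spawn, before any minikube code executes. A minimal sketch of the step that appears to be missing; the path below is hypothetical and the snippet is illustrative rather than the test's actual code:

```go
// After downloading a release binary to a temp file, it must be made
// executable before exec can run it; otherwise fork/exec fails with
// EACCES ("permission denied"), exactly as in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	bin := filepath.Join(os.TempDir(), "minikube-v1.6.2.exe") // hypothetical path

	// The likely missing step: grant the execute bit on the downloaded file.
	if err := os.Chmod(bin, 0o755); err != nil {
		log.Fatalf("chmod %s: %v", bin, err)
	}

	out, err := exec.Command(bin, "version").CombinedOutput()
	if err != nil {
		log.Fatalf("%s version: %v\n%s", bin, err, out)
	}
	os.Stdout.Write(out)
}
```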

TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-844000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-844000: exit status 85 (78.19575ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000 sudo cat                | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000 sudo cat                | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000 sudo cat                | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-525000                         | enable-default-cni-525000 | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT | 31 Jul 23 04:16 PDT |
	| start   | -p flannel-525000                                    | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=qemu2                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo crictl                        | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo crictl                        | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo find                          | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo ip a s                        | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	| ssh     | -p flannel-525000 sudo ip r s                        | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /run/flannel/subnet.env                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo docker                        | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo cat                           | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo                               | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo find                          | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-525000 sudo crio                          | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-525000                                    | flannel-525000            | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT | 31 Jul 23 04:16 PDT |
	| start   | -p bridge-525000 --memory=3072                       | bridge-525000             | jenkins | v1.31.1 | 31 Jul 23 04:16 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 04:16:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 04:16:31.545440    8252 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:31.545567    8252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:31.545570    8252 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:31.545573    8252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:31.545678    8252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:31.546704    8252 out.go:303] Setting JSON to false
	I0731 04:16:31.562139    8252 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9962,"bootTime":1690792229,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:31.562227    8252 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:31.567181    8252 out.go:177] * [bridge-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:31.574165    8252 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:31.574214    8252 notify.go:220] Checking for updates...
	I0731 04:16:31.577319    8252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:31.581209    8252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:31.584213    8252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:31.587168    8252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:31.590213    8252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:31.593436    8252 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:31.593477    8252 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:31.597178    8252 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:31.604165    8252 start.go:298] selected driver: qemu2
	I0731 04:16:31.604170    8252 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:31.604177    8252 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:31.605975    8252 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:31.609135    8252 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:31.612289    8252 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:31.612314    8252 cni.go:84] Creating CNI manager for "bridge"
	I0731 04:16:31.612319    8252 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:16:31.612335    8252 start_flags.go:319] config:
	{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:31.616421    8252 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:31.623161    8252 out.go:177] * Starting control plane node bridge-525000 in cluster bridge-525000
	I0731 04:16:31.627018    8252 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:31.627041    8252 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:16:31.627051    8252 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:31.627118    8252 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:31.627123    8252 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:16:31.627189    8252 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/bridge-525000/config.json ...
	I0731 04:16:31.627201    8252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/bridge-525000/config.json: {Name:mk2653e531c1716846660c76162ce3ddcde99e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:31.627426    8252 start.go:365] acquiring machines lock for bridge-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:31.627454    8252 start.go:369] acquired machines lock for "bridge-525000" in 22.958µs
	I0731 04:16:31.627465    8252 start.go:93] Provisioning new machine with config: &{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:31.627491    8252 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:31.634107    8252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:31.649761    8252 start.go:159] libmachine.API.Create for "bridge-525000" (driver="qemu2")
	I0731 04:16:31.649786    8252 client.go:168] LocalClient.Create starting
	I0731 04:16:31.649846    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:31.649865    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:31.649873    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:31.649918    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:31.649932    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:31.649937    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:31.650220    8252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:31.769675    8252 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:31.962684    8252 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:31.962698    8252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:31.962891    8252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.971945    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:31.971974    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:31.972037    8252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2 +20000M
	I0731 04:16:31.979509    8252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:31.979521    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:31.979540    8252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.979548    8252 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:31.979592    8252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4c:49:3e:0d:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:31.981082    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:31.981096    8252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:31.981116    8252 client.go:171] LocalClient.Create took 331.333167ms
	I0731 04:16:33.983226    8252 start.go:128] duration metric: createHost completed in 2.355772166s
	I0731 04:16:33.983289    8252 start.go:83] releasing machines lock for "bridge-525000", held for 2.355878834s
	W0731 04:16:33.983400    8252 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:33.991951    8252 out.go:177] * Deleting "bridge-525000" in qemu2 ...
	W0731 04:16:34.014019    8252 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:34.014047    8252 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:39.016133    8252 start.go:365] acquiring machines lock for bridge-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:39.016657    8252 start.go:369] acquired machines lock for "bridge-525000" in 421.25µs
	I0731 04:16:39.016765    8252 start.go:93] Provisioning new machine with config: &{Name:bridge-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:39.017017    8252 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:39.023591    8252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:39.070993    8252 start.go:159] libmachine.API.Create for "bridge-525000" (driver="qemu2")
	I0731 04:16:39.071037    8252 client.go:168] LocalClient.Create starting
	I0731 04:16:39.071191    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:39.071245    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:39.071264    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:39.071377    8252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:39.071411    8252 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:39.071424    8252 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:39.071938    8252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:39.205421    8252 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:39.328473    8252 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:39.328479    8252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:39.328642    8252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.336983    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:39.336995    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:39.337046    8252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2 +20000M
	I0731 04:16:39.344348    8252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:39.344361    8252 main.go:141] libmachine: STDERR: 
	I0731 04:16:39.344373    8252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.344381    8252 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:39.344425    8252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e2:23:64:c8:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/bridge-525000/disk.qcow2
	I0731 04:16:39.345909    8252 main.go:141] libmachine: STDOUT: 
	I0731 04:16:39.345923    8252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:39.345936    8252 client.go:171] LocalClient.Create took 274.898208ms
	I0731 04:16:41.348045    8252 start.go:128] duration metric: createHost completed in 2.331060375s
	I0731 04:16:41.348146    8252 start.go:83] releasing machines lock for "bridge-525000", held for 2.331482333s
	W0731 04:16:41.348573    8252 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:41.360169    8252 out.go:177] 
	W0731 04:16:41.364177    8252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:41.364205    8252 out.go:239] * 
	W0731 04:16:41.366783    8252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:41.376116    8252 out.go:177] 
	
	* 
	* Profile "stopped-upgrade-844000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-844000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)
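
The exit status 85 above is a downstream symptom rather than an independent failure: `minikube logs` was asked about the profile "stopped-upgrade-844000", and the stdout fragment shows that the profile was never created, because its VM could not be started earlier in the run. One quick way to confirm such a cascade is to check whether the profile's config.json exists under MINIKUBE_HOME, the same profiles/<name>/config.json layout the start logs in this report save to. A minimal sketch, assuming the MINIKUBE_HOME used throughout this run (adjust for another environment):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// MINIKUBE_HOME as used by this run, taken from the log output above.
		home := "/Users/jenkins/minikube-integration/16968-4815/.minikube"
		// minikube saves each profile's config to profiles/<name>/config.json.
		cfg := filepath.Join(home, "profiles", "stopped-upgrade-844000", "config.json")
		if _, err := os.Stat(cfg); os.IsNotExist(err) {
			fmt.Println("profile was never created:", cfg)
			return
		}
		fmt.Println("profile config present:", cfg)
	}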

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-525000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.83503875s)

-- stdout --
	* [kubenet-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-525000 in cluster kubenet-525000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:42.022127    8309 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:42.022256    8309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:42.022259    8309 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:42.022262    8309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:42.022380    8309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:42.023514    8309 out.go:303] Setting JSON to false
	I0731 04:16:42.040062    8309 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9973,"bootTime":1690792229,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:42.040136    8309 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:42.044848    8309 out.go:177] * [kubenet-525000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:42.050792    8309 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:42.053837    8309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:42.050833    8309 notify.go:220] Checking for updates...
	I0731 04:16:42.060753    8309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:42.063803    8309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:42.066754    8309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:42.069792    8309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:42.073101    8309 config.go:182] Loaded profile config "bridge-525000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:42.073164    8309 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:42.073214    8309 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:42.075746    8309 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:42.082811    8309 start.go:298] selected driver: qemu2
	I0731 04:16:42.082816    8309 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:42.082822    8309 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:42.084869    8309 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:42.086117    8309 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:42.088965    8309 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:42.088987    8309 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 04:16:42.088992    8309 start_flags.go:319] config:
	{Name:kubenet-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:42.093401    8309 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:42.101709    8309 out.go:177] * Starting control plane node kubenet-525000 in cluster kubenet-525000
	I0731 04:16:42.105785    8309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:42.105828    8309 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:16:42.105838    8309 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:42.105937    8309 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:42.105944    8309 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:16:42.106009    8309 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kubenet-525000/config.json ...
	I0731 04:16:42.106022    8309 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/kubenet-525000/config.json: {Name:mkdeae78a4b33aeea33453a0478ffbb86f9227f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:42.106230    8309 start.go:365] acquiring machines lock for kubenet-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:42.106253    8309 start.go:369] acquired machines lock for "kubenet-525000" in 17.791µs
	I0731 04:16:42.106263    8309 start.go:93] Provisioning new machine with config: &{Name:kubenet-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:42.106296    8309 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:42.110764    8309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:42.125183    8309 start.go:159] libmachine.API.Create for "kubenet-525000" (driver="qemu2")
	I0731 04:16:42.125199    8309 client.go:168] LocalClient.Create starting
	I0731 04:16:42.125252    8309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:42.125272    8309 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:42.125279    8309 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:42.125312    8309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:42.125329    8309 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:42.125337    8309 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:42.125646    8309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:42.291462    8309 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:42.331957    8309 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:42.331966    8309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:42.332261    8309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:42.351453    8309 main.go:141] libmachine: STDOUT: 
	I0731 04:16:42.351474    8309 main.go:141] libmachine: STDERR: 
	I0731 04:16:42.351525    8309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2 +20000M
	I0731 04:16:42.359495    8309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:42.359512    8309 main.go:141] libmachine: STDERR: 
	I0731 04:16:42.359534    8309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:42.359543    8309 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:42.359581    8309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:47:ec:f7:0c:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:42.361362    8309 main.go:141] libmachine: STDOUT: 
	I0731 04:16:42.361379    8309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:42.361396    8309 client.go:171] LocalClient.Create took 236.199084ms
	I0731 04:16:44.363531    8309 start.go:128] duration metric: createHost completed in 2.257261792s
	I0731 04:16:44.363620    8309 start.go:83] releasing machines lock for "kubenet-525000", held for 2.257407625s
	W0731 04:16:44.363716    8309 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:44.380695    8309 out.go:177] * Deleting "kubenet-525000" in qemu2 ...
	W0731 04:16:44.396838    8309 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:44.396865    8309 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:49.398997    8309 start.go:365] acquiring machines lock for kubenet-525000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:49.399511    8309 start.go:369] acquired machines lock for "kubenet-525000" in 396.583µs
	I0731 04:16:49.399650    8309 start.go:93] Provisioning new machine with config: &{Name:kubenet-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:49.399968    8309 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:49.405557    8309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 04:16:49.452154    8309 start.go:159] libmachine.API.Create for "kubenet-525000" (driver="qemu2")
	I0731 04:16:49.452208    8309 client.go:168] LocalClient.Create starting
	I0731 04:16:49.452373    8309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:49.452439    8309 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:49.452457    8309 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:49.452565    8309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:49.452595    8309 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:49.452611    8309 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:49.453108    8309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:49.584983    8309 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:49.771959    8309 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:49.771965    8309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:49.772137    8309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:49.781334    8309 main.go:141] libmachine: STDOUT: 
	I0731 04:16:49.781349    8309 main.go:141] libmachine: STDERR: 
	I0731 04:16:49.781402    8309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2 +20000M
	I0731 04:16:49.788563    8309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:49.788583    8309 main.go:141] libmachine: STDERR: 
	I0731 04:16:49.788598    8309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:49.788625    8309 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:49.788660    8309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:df:36:5d:7a:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/kubenet-525000/disk.qcow2
	I0731 04:16:49.790184    8309 main.go:141] libmachine: STDOUT: 
	I0731 04:16:49.790196    8309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:49.790210    8309 client.go:171] LocalClient.Create took 338.001708ms
	I0731 04:16:51.792309    8309 start.go:128] duration metric: createHost completed in 2.392350542s
	I0731 04:16:51.792388    8309 start.go:83] releasing machines lock for "kubenet-525000", held for 2.392882917s
	W0731 04:16:51.792741    8309 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:51.804261    8309 out.go:177] 
	W0731 04:16:51.809346    8309 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:51.809386    8309 out.go:239] * 
	W0731 04:16:51.812213    8309 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:51.821231    8309 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)
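
Every failed start in this report breaks at the same step: qemu-img creates and resizes the disk image successfully, but socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and the test exits with GUEST_PROVISION after a single retry. The repeated "Connection refused" strongly suggests the socket_vmnet service was not running on this build agent. A pre-flight probe of the socket would surface that before each ten-second retry loop; a minimal sketch using only the Go standard library, with the socket path taken from the logs above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The unix socket every failing run in this report tries to reach.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the "Connection refused" seen in each failed start.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}

If the probe fails, restarting the socket_vmnet daemon on the agent (for example via the launchd service described in the socket_vmnet README) should clear these GUEST_PROVISION failures, since nothing else in these runs fails before the connection attempt.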

TestStartStop/group/old-k8s-version/serial/FirstStart (10.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (10.628029416s)

-- stdout --
	* [old-k8s-version-611000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-611000 in cluster old-k8s-version-611000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-611000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:43.568041    8391 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:43.568173    8391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:43.568176    8391 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:43.568179    8391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:43.568288    8391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:43.569361    8391 out.go:303] Setting JSON to false
	I0731 04:16:43.584678    8391 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9974,"bootTime":1690792229,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:43.584764    8391 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:43.589778    8391 out.go:177] * [old-k8s-version-611000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:43.596755    8391 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:43.596793    8391 notify.go:220] Checking for updates...
	I0731 04:16:43.600779    8391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:43.604588    8391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:43.607713    8391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:43.610718    8391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:43.613767    8391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:43.617102    8391 config.go:182] Loaded profile config "kubenet-525000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:43.617170    8391 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:43.617226    8391 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:43.621666    8391 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:43.628724    8391 start.go:298] selected driver: qemu2
	I0731 04:16:43.628732    8391 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:43.628739    8391 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:43.630597    8391 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:43.633748    8391 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:43.636850    8391 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:43.636872    8391 cni.go:84] Creating CNI manager for ""
	I0731 04:16:43.636881    8391 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:16:43.636885    8391 start_flags.go:319] config:
	{Name:old-k8s-version-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:43.641197    8391 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:43.649787    8391 out.go:177] * Starting control plane node old-k8s-version-611000 in cluster old-k8s-version-611000
	I0731 04:16:43.653609    8391 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 04:16:43.653637    8391 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 04:16:43.653650    8391 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:43.653716    8391 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:43.653722    8391 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0731 04:16:43.653792    8391 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/old-k8s-version-611000/config.json ...
	I0731 04:16:43.653810    8391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/old-k8s-version-611000/config.json: {Name:mkfa0af4ea7b5b34744528a9db04d468bde01f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:43.654054    8391 start.go:365] acquiring machines lock for old-k8s-version-611000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:44.363776    8391 start.go:369] acquired machines lock for "old-k8s-version-611000" in 709.678583ms
	I0731 04:16:44.363889    8391 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:44.364125    8391 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:44.373727    8391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:16:44.419585    8391 start.go:159] libmachine.API.Create for "old-k8s-version-611000" (driver="qemu2")
	I0731 04:16:44.419621    8391 client.go:168] LocalClient.Create starting
	I0731 04:16:44.419752    8391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:44.419812    8391 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:44.419842    8391 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:44.419890    8391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:44.419917    8391 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:44.419933    8391 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:44.420558    8391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:44.551191    8391 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:44.628377    8391 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:44.628384    8391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:44.628536    8391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:44.637257    8391 main.go:141] libmachine: STDOUT: 
	I0731 04:16:44.637270    8391 main.go:141] libmachine: STDERR: 
	I0731 04:16:44.637324    8391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2 +20000M
	I0731 04:16:44.644500    8391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:44.644524    8391 main.go:141] libmachine: STDERR: 
	I0731 04:16:44.644544    8391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:44.644549    8391 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:44.644598    8391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:a0:b6:11:84:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:44.646157    8391 main.go:141] libmachine: STDOUT: 
	I0731 04:16:44.646172    8391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:44.646190    8391 client.go:171] LocalClient.Create took 226.565334ms
	I0731 04:16:46.648302    8391 start.go:128] duration metric: createHost completed in 2.284206292s
	I0731 04:16:46.648379    8391 start.go:83] releasing machines lock for "old-k8s-version-611000", held for 2.284622042s
	W0731 04:16:46.648476    8391 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:46.660811    8391 out.go:177] * Deleting "old-k8s-version-611000" in qemu2 ...
	W0731 04:16:46.682635    8391 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:46.682664    8391 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:51.684802    8391 start.go:365] acquiring machines lock for old-k8s-version-611000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:51.792496    8391 start.go:369] acquired machines lock for "old-k8s-version-611000" in 107.591459ms
	I0731 04:16:51.792644    8391 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:51.792917    8391 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:51.801380    8391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:16:51.847439    8391 start.go:159] libmachine.API.Create for "old-k8s-version-611000" (driver="qemu2")
	I0731 04:16:51.847488    8391 client.go:168] LocalClient.Create starting
	I0731 04:16:51.847669    8391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:51.847741    8391 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:51.847759    8391 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:51.847855    8391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:51.847891    8391 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:51.847906    8391 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:51.848421    8391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:51.978424    8391 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:52.114456    8391 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:52.114471    8391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:52.114655    8391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:52.124025    8391 main.go:141] libmachine: STDOUT: 
	I0731 04:16:52.124054    8391 main.go:141] libmachine: STDERR: 
	I0731 04:16:52.124134    8391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2 +20000M
	I0731 04:16:52.132334    8391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:52.132352    8391 main.go:141] libmachine: STDERR: 
	I0731 04:16:52.132372    8391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:52.132379    8391 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:52.132428    8391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:fb:f0:b9:a4:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:52.134074    8391 main.go:141] libmachine: STDOUT: 
	I0731 04:16:52.134089    8391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:52.134101    8391 client.go:171] LocalClient.Create took 286.615ms
	I0731 04:16:54.136152    8391 start.go:128] duration metric: createHost completed in 2.343251417s
	I0731 04:16:54.136173    8391 start.go:83] releasing machines lock for "old-k8s-version-611000", held for 2.343710917s
	W0731 04:16:54.136265    8391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:54.145472    8391 out.go:177] 
	W0731 04:16:54.149495    8391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:54.149504    8391 out.go:239] * 
	* 
	W0731 04:16:54.150027    8391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:16:54.161454    8391 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (33.558583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.66s)
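Every VM-creation failure in this group bottoms out in the same STDERR line: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal standalone probe (a triage sketch, not part of the test suite) reproduces just that connectivity check:

// probe_socket_vmnet.go - a minimal sketch, assuming the SocketVMnetPath
// from this run's config (/var/run/socket_vmnet; dialing it may need root).
// It dials the unix socket that socket_vmnet_client hands to QEMU as fd 3.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With the daemon down this prints the same "connection refused"
		// (or "no such file or directory") seen throughout this report.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet daemon on the CI host is the likely fix; the FirstStart/SecondStart failures below share this root cause.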

TestStartStop/group/no-preload/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.880829958s)

-- stdout --
	* [no-preload-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-775000 in cluster no-preload-775000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:53.905366    8505 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:53.905500    8505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:53.905502    8505 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:53.905505    8505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:53.905617    8505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:53.906644    8505 out.go:303] Setting JSON to false
	I0731 04:16:53.921972    8505 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9984,"bootTime":1690792229,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:53.922056    8505 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:53.926965    8505 out.go:177] * [no-preload-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:53.934928    8505 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:53.934967    8505 notify.go:220] Checking for updates...
	I0731 04:16:53.938823    8505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:53.942892    8505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:53.945924    8505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:53.949847    8505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:53.952938    8505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:53.956282    8505 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:16:53.956351    8505 config.go:182] Loaded profile config "old-k8s-version-611000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0731 04:16:53.956396    8505 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:53.960856    8505 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:16:53.967935    8505 start.go:298] selected driver: qemu2
	I0731 04:16:53.967950    8505 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:16:53.967958    8505 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:53.969994    8505 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:16:53.972831    8505 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:16:53.976001    8505 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:53.976023    8505 cni.go:84] Creating CNI manager for ""
	I0731 04:16:53.976039    8505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:16:53.976042    8505 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:16:53.976049    8505 start_flags.go:319] config:
	{Name:no-preload-775000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:53.980198    8505 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.986887    8505 out.go:177] * Starting control plane node no-preload-775000 in cluster no-preload-775000
	I0731 04:16:53.990898    8505 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:16:53.991012    8505 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/no-preload-775000/config.json ...
	I0731 04:16:53.991050    8505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/no-preload-775000/config.json: {Name:mke28450b3a22f4cb9a3e2f01fa4740b221194cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:16:53.991044    8505 cache.go:107] acquiring lock: {Name:mkd965e87299c119b23fe0eb0b9d8acc1778f75e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991069    8505 cache.go:107] acquiring lock: {Name:mkd7662e4adde658701109e86222cd4e00a80ea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991067    8505 cache.go:107] acquiring lock: {Name:mkf9099aaaa86448656c2c039f37b5b1a6d06004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991137    8505 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 04:16:53.991143    8505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.667µs
	I0731 04:16:53.991151    8505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 04:16:53.991161    8505 cache.go:107] acquiring lock: {Name:mkd647942dd11baf002954725ce7599d6d717ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991289    8505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0731 04:16:53.991293    8505 cache.go:107] acquiring lock: {Name:mkc8f63a4c9aa70ab12a89534c9c8913e58cfc82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991301    8505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0731 04:16:53.991309    8505 start.go:365] acquiring machines lock for no-preload-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:53.991336    8505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0731 04:16:53.991298    8505 cache.go:107] acquiring lock: {Name:mkc4a8611703cfa723270dd2d6a4175a6e745dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991304    8505 cache.go:107] acquiring lock: {Name:mkde5afe32e7f33b70773ca78336f62ae431d648 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991360    8505 cache.go:107] acquiring lock: {Name:mkc4ae4512a782157446ebcd936f186cb4a842b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:53.991517    8505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0731 04:16:53.991520    8505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0731 04:16:53.991535    8505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0731 04:16:53.991686    8505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0731 04:16:53.998309    8505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0731 04:16:53.998328    8505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0731 04:16:53.998350    8505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0731 04:16:53.998461    8505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0731 04:16:53.998977    8505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0731 04:16:53.999016    8505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0731 04:16:53.999120    8505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0731 04:16:54.136235    8505 start.go:369] acquired machines lock for "no-preload-775000" in 144.915166ms
	I0731 04:16:54.136272    8505 start.go:93] Provisioning new machine with config: &{Name:no-preload-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:16:54.136368    8505 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:16:54.145502    8505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:16:54.159450    8505 start.go:159] libmachine.API.Create for "no-preload-775000" (driver="qemu2")
	I0731 04:16:54.159476    8505 client.go:168] LocalClient.Create starting
	I0731 04:16:54.159550    8505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:16:54.159570    8505 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:54.159582    8505 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:54.159624    8505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:16:54.159638    8505 main.go:141] libmachine: Decoding PEM data...
	I0731 04:16:54.159646    8505 main.go:141] libmachine: Parsing certificate...
	I0731 04:16:54.164973    8505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:16:54.298346    8505 main.go:141] libmachine: Creating SSH key...
	I0731 04:16:54.346924    8505 main.go:141] libmachine: Creating Disk image...
	I0731 04:16:54.346939    8505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:16:54.347090    8505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:16:54.358572    8505 main.go:141] libmachine: STDOUT: 
	I0731 04:16:54.358601    8505 main.go:141] libmachine: STDERR: 
	I0731 04:16:54.358662    8505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2 +20000M
	I0731 04:16:54.366839    8505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:16:54.366858    8505 main.go:141] libmachine: STDERR: 
	I0731 04:16:54.366880    8505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:16:54.366886    8505 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:16:54.366922    8505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:49:30:4e:f8:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:16:54.368810    8505 main.go:141] libmachine: STDOUT: 
	I0731 04:16:54.368824    8505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:54.368842    8505 client.go:171] LocalClient.Create took 209.364083ms
	I0731 04:16:55.189400    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0731 04:16:55.202085    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0731 04:16:55.228578    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0731 04:16:55.380909    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0731 04:16:55.509099    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0731 04:16:55.515161    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0731 04:16:55.515173    8505 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.523940375s
	I0731 04:16:55.515179    8505 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0731 04:16:55.669126    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0731 04:16:55.878638    8505 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0731 04:16:56.369027    8505 start.go:128] duration metric: createHost completed in 2.232684791s
	I0731 04:16:56.369086    8505 start.go:83] releasing machines lock for "no-preload-775000", held for 2.232874417s
	W0731 04:16:56.369138    8505 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:56.384603    8505 out.go:177] * Deleting "no-preload-775000" in qemu2 ...
	W0731 04:16:56.409225    8505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:56.409263    8505 start.go:687] Will try again in 5 seconds ...
	I0731 04:16:57.609988    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0731 04:16:57.610036    8505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 3.618952166s
	I0731 04:16:57.610059    8505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0731 04:16:58.107904    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0731 04:16:58.107952    8505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 4.116721708s
	I0731 04:16:58.107981    8505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0731 04:16:59.216857    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0731 04:16:59.216931    8505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 5.225999916s
	I0731 04:16:59.216973    8505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0731 04:16:59.588801    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0731 04:16:59.588850    8505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 5.597736375s
	I0731 04:16:59.588878    8505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0731 04:16:59.690431    8505 cache.go:157] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0731 04:16:59.690470    8505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 5.699550875s
	I0731 04:16:59.690531    8505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0731 04:17:01.417334    8505 start.go:365] acquiring machines lock for no-preload-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:01.426452    8505 start.go:369] acquired machines lock for "no-preload-775000" in 9.060625ms
	I0731 04:17:01.426507    8505 start.go:93] Provisioning new machine with config: &{Name:no-preload-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:01.426719    8505 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:01.441020    8505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:01.489421    8505 start.go:159] libmachine.API.Create for "no-preload-775000" (driver="qemu2")
	I0731 04:17:01.489475    8505 client.go:168] LocalClient.Create starting
	I0731 04:17:01.489603    8505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:01.489675    8505 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:01.489698    8505 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:01.489772    8505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:01.489808    8505 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:01.489839    8505 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:01.490382    8505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:01.621809    8505 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:01.698454    8505 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:01.698466    8505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:01.698627    8505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:17:01.707744    8505 main.go:141] libmachine: STDOUT: 
	I0731 04:17:01.707762    8505 main.go:141] libmachine: STDERR: 
	I0731 04:17:01.707814    8505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2 +20000M
	I0731 04:17:01.715549    8505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:01.715568    8505 main.go:141] libmachine: STDERR: 
	I0731 04:17:01.715583    8505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:17:01.715592    8505 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:01.715652    8505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4d:63:54:d6:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:17:01.717294    8505 main.go:141] libmachine: STDOUT: 
	I0731 04:17:01.717305    8505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:01.717317    8505 client.go:171] LocalClient.Create took 227.84275ms
	I0731 04:17:03.719269    8505 start.go:128] duration metric: createHost completed in 2.292531917s
	I0731 04:17:03.719322    8505 start.go:83] releasing machines lock for "no-preload-775000", held for 2.2928965s
	W0731 04:17:03.719559    8505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:03.731169    8505 out.go:177] 
	W0731 04:17:03.735143    8505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:03.735179    8505 out.go:239] * 
	* 
	W0731 04:17:03.738015    8505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:03.746058    8505 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (49.079959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.93s)
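The stderr trace above also shows minikube's recovery path: the first LocalClient.Create fails, the profile is deleted (* Deleting "no-preload-775000" in qemu2 ...), and start.go retries once after five seconds before exiting with GUEST_PROVISION. A schematic reconstruction of that control flow (simplified for illustration; names and structure are assumptions, not minikube's actual start.go) is:

// Schematic retry flow, mirroring "! StartHost failed, but will try again"
// and "Will try again in 5 seconds ..." from the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for libmachine VM creation; fails the way this run did.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return // first attempt succeeded
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}

Note that the image-cache goroutines keep running during the retry window, which is why the kube-* image saves in cache.go interleave with the second createHost attempt above.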

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-611000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-611000 create -f testdata/busybox.yaml: exit status 1 (31.342375ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-611000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (34.001375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (33.698792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
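kubectl's "error: no openapi getter" here is a downstream symptom, not a separate bug: create -f validates the manifest against the server's OpenAPI schema, and since the VM never started there is no API server to fetch it from. A hypothetical preflight (not part of helpers_test.go; the context name is taken from this run, everything else is illustrative) would make that dependency explicit:

// Hypothetical preflight sketch: confirm the kubectl context can reach an
// API server before attempting `kubectl create -f`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --request-timeout keeps the probe short when the server is down.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-611000",
		"get", "--raw", "/version", "--request-timeout=5s")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Matches this run: the cluster never came up, so any
		// schema-dependent kubectl verb fails before reading the manifest.
		fmt.Printf("apiserver unreachable, skip create: %v\n%s", err, out)
		return
	}
	fmt.Printf("apiserver reachable: %s\n", out)
}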

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-611000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-611000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-611000 describe deploy/metrics-server -n kube-system: exit status 1 (27.971167ms)

** stderr ** 
	error: context "old-k8s-version-611000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-611000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (28.138958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E0731 04:16:57.024238    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.906004s)

-- stdout --
	* [old-k8s-version-611000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-611000 in cluster old-k8s-version-611000
	* Restarting existing qemu2 VM for "old-k8s-version-611000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-611000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:16:54.590469    8572 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:16:54.590582    8572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:54.590585    8572 out.go:309] Setting ErrFile to fd 2...
	I0731 04:16:54.590587    8572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:16:54.590729    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:16:54.591804    8572 out.go:303] Setting JSON to false
	I0731 04:16:54.607201    8572 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9985,"bootTime":1690792229,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:16:54.607268    8572 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:16:54.611433    8572 out.go:177] * [old-k8s-version-611000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:16:54.618473    8572 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:16:54.618546    8572 notify.go:220] Checking for updates...
	I0731 04:16:54.625395    8572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:16:54.632509    8572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:16:54.640449    8572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:16:54.643475    8572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:16:54.647516    8572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:16:54.659120    8572 config.go:182] Loaded profile config "old-k8s-version-611000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0731 04:16:54.663408    8572 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 04:16:54.666344    8572 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:16:54.670414    8572 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:16:54.676416    8572 start.go:298] selected driver: qemu2
	I0731 04:16:54.676422    8572 start.go:898] validating driver "qemu2" against &{Name:old-k8s-version-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:54.676485    8572 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:16:54.678292    8572 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:16:54.678319    8572 cni.go:84] Creating CNI manager for ""
	I0731 04:16:54.678327    8572 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 04:16:54.678333    8572 start_flags.go:319] config:
	{Name:old-k8s-version-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:16:54.681980    8572 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:16:54.690423    8572 out.go:177] * Starting control plane node old-k8s-version-611000 in cluster old-k8s-version-611000
	I0731 04:16:54.694422    8572 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 04:16:54.694452    8572 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 04:16:54.694463    8572 cache.go:57] Caching tarball of preloaded images
	I0731 04:16:54.694533    8572 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:16:54.694541    8572 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0731 04:16:54.694602    8572 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/old-k8s-version-611000/config.json ...
	I0731 04:16:54.694896    8572 start.go:365] acquiring machines lock for old-k8s-version-611000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:16:56.369243    8572 start.go:369] acquired machines lock for "old-k8s-version-611000" in 1.674344541s
	I0731 04:16:56.369341    8572 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:16:56.369374    8572 fix.go:54] fixHost starting: 
	I0731 04:16:56.370031    8572 fix.go:102] recreateIfNeeded on old-k8s-version-611000: state=Stopped err=<nil>
	W0731 04:16:56.370074    8572 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:16:56.380607    8572 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-611000" ...
	I0731 04:16:56.388876    8572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:fb:f0:b9:a4:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:16:56.399947    8572 main.go:141] libmachine: STDOUT: 
	I0731 04:16:56.400016    8572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:16:56.400152    8572 fix.go:56] fixHost completed within 30.789458ms
	I0731 04:16:56.400172    8572 start.go:83] releasing machines lock for "old-k8s-version-611000", held for 30.895125ms
	W0731 04:16:56.400210    8572 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:16:56.400378    8572 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:16:56.400394    8572 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:01.402067    8572 start.go:365] acquiring machines lock for old-k8s-version-611000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:01.402569    8572 start.go:369] acquired machines lock for "old-k8s-version-611000" in 396.792µs
	I0731 04:17:01.402731    8572 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:01.402750    8572 fix.go:54] fixHost starting: 
	I0731 04:17:01.403611    8572 fix.go:102] recreateIfNeeded on old-k8s-version-611000: state=Stopped err=<nil>
	W0731 04:17:01.403639    8572 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:01.409124    8572 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-611000" ...
	I0731 04:17:01.417089    8572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:fb:f0:b9:a4:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/old-k8s-version-611000/disk.qcow2
	I0731 04:17:01.426200    8572 main.go:141] libmachine: STDOUT: 
	I0731 04:17:01.426261    8572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:01.426357    8572 fix.go:56] fixHost completed within 23.606667ms
	I0731 04:17:01.426381    8572 start.go:83] releasing machines lock for "old-k8s-version-611000", held for 23.785833ms
	W0731 04:17:01.426631    8572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-611000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-611000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:01.445090    8572 out.go:177] 
	W0731 04:17:01.448164    8572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:01.448213    8572 out.go:239] * 
	* 
	W0731 04:17:01.450683    8572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:01.459042    8572 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-611000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (51.07575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.96s)

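Every failure in this group reduces to the same root cause: socket_vmnet_client cannot connect to /var/run/socket_vmnet, which typically means the socket_vmnet daemon is not running (or the socket path is stale), so QEMU is never launched. A minimal triage sketch for the build host follows; the launchd/Homebrew service names are assumptions and may differ on this agent:

	# Is the socket present, and is anything serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon (assumes a Homebrew-managed install; vmnet needs root)
	sudo brew services restart socket_vmnet
	# socket_vmnet_client connects to the socket, then execs its argument with the vmnet fd attached
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"
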
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-611000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (34.580167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

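The "context does not exist" errors in this group are a cascade from the failed start above: the kubeconfig context is only written once a start succeeds, so every kubectl --context assertion fails the same way. A quick confirmation sketch (profile name taken from this report):

	kubectl config get-contexts -o name | grep old-k8s-version-611000 || echo "context missing"
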
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-611000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-611000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-611000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.612417ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-611000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-611000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (32.728667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-611000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-611000 "sudo crictl images -o json": exit status 89 (40.279333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-611000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-611000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-611000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
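The all-images-missing diff above is a downstream symptom, not a real image problem: "sudo crictl images -o json" returned the plain-text control-plane hint (exit status 89) instead of JSON, so the decode failed and the want list was compared against nothing. Against a healthy node the same check can be reproduced by hand, e.g. (assuming jq is available):

	out/minikube-darwin-arm64 ssh -p old-k8s-version-611000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'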
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (28.007958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-611000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-611000 --alsologtostderr -v=1: exit status 89 (43.614583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-611000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:01.714303    8654 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:01.714623    8654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:01.714627    8654 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:01.714630    8654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:01.714746    8654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:01.714939    8654 out.go:303] Setting JSON to false
	I0731 04:17:01.714949    8654 mustload.go:65] Loading cluster: old-k8s-version-611000
	I0731 04:17:01.715110    8654 config.go:182] Loaded profile config "old-k8s-version-611000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0731 04:17:01.719039    8654 out.go:177] * The control plane node must be running for this command
	I0731 04:17:01.726019    8654 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-611000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-611000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (27.373333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (27.722375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-611000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

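Three exit codes recur throughout this report and carry distinct signals: 80 (GUEST_PROVISION) from the failed starts, 89 from the control-plane-must-be-running guard, and 7 from status against a stopped host. For example, reproducing two of them against this stopped profile:

	out/minikube-darwin-arm64 pause -p old-k8s-version-611000; echo $?    # 89: control plane not running
	out/minikube-darwin-arm64 status -p old-k8s-version-611000; echo $?   # 7: host reports Stopped
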
TestStartStop/group/embed-certs/serial/FirstStart (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.254167208s)

                                                
                                                
-- stdout --
	* [embed-certs-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-775000 in cluster embed-certs-775000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:02.162587    8680 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:02.162698    8680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:02.162700    8680 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:02.162702    8680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:02.162807    8680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:02.163916    8680 out.go:303] Setting JSON to false
	I0731 04:17:02.178997    8680 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9993,"bootTime":1690792229,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:02.179057    8680 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:02.183911    8680 out.go:177] * [embed-certs-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:02.194865    8680 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:02.190839    8680 notify.go:220] Checking for updates...
	I0731 04:17:02.202848    8680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:02.209913    8680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:02.216867    8680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:02.224808    8680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:02.232885    8680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:02.237146    8680 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:02.237213    8680 config.go:182] Loaded profile config "no-preload-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:02.237251    8680 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:02.239888    8680 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:17:02.246857    8680 start.go:298] selected driver: qemu2
	I0731 04:17:02.246861    8680 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:17:02.246867    8680 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:02.248813    8680 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:17:02.252891    8680 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:17:02.254340    8680 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:17:02.254360    8680 cni.go:84] Creating CNI manager for ""
	I0731 04:17:02.254369    8680 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:02.254375    8680 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:17:02.254381    8680 start_flags.go:319] config:
	{Name:embed-certs-775000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:02.258781    8680 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:02.266873    8680 out.go:177] * Starting control plane node embed-certs-775000 in cluster embed-certs-775000
	I0731 04:17:02.270862    8680 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:02.270897    8680 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:02.270908    8680 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:02.270975    8680 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:02.270980    8680 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:02.271032    8680 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/embed-certs-775000/config.json ...
	I0731 04:17:02.271044    8680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/embed-certs-775000/config.json: {Name:mk6ae6d6dc0206af935c936e8537434cdf4659af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:17:02.271236    8680 start.go:365] acquiring machines lock for embed-certs-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:03.719526    8680 start.go:369] acquired machines lock for "embed-certs-775000" in 1.448283625s
	I0731 04:17:03.719719    8680 start.go:93] Provisioning new machine with config: &{Name:embed-certs-775000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:03.719919    8680 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:03.728056    8680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:03.775911    8680 start.go:159] libmachine.API.Create for "embed-certs-775000" (driver="qemu2")
	I0731 04:17:03.775970    8680 client.go:168] LocalClient.Create starting
	I0731 04:17:03.776121    8680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:03.776179    8680 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:03.776206    8680 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:03.776310    8680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:03.776343    8680 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:03.776364    8680 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:03.776907    8680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:03.912577    8680 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:04.025327    8680 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:04.025335    8680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:04.025478    8680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:04.034453    8680 main.go:141] libmachine: STDOUT: 
	I0731 04:17:04.034506    8680 main.go:141] libmachine: STDERR: 
	I0731 04:17:04.034585    8680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2 +20000M
	I0731 04:17:04.042958    8680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:04.042977    8680 main.go:141] libmachine: STDERR: 
	I0731 04:17:04.043004    8680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:04.043012    8680 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:04.043052    8680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a6:3d:54:20:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:04.044886    8680 main.go:141] libmachine: STDOUT: 
	I0731 04:17:04.044898    8680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:04.044917    8680 client.go:171] LocalClient.Create took 268.945458ms
	I0731 04:17:06.047252    8680 start.go:128] duration metric: createHost completed in 2.3272645s
	I0731 04:17:06.047343    8680 start.go:83] releasing machines lock for "embed-certs-775000", held for 2.32783175s
	W0731 04:17:06.047406    8680 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:06.065871    8680 out.go:177] * Deleting "embed-certs-775000" in qemu2 ...
	W0731 04:17:06.090468    8680 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:06.090499    8680 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:11.092043    8680 start.go:365] acquiring machines lock for embed-certs-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:11.106062    8680 start.go:369] acquired machines lock for "embed-certs-775000" in 13.947042ms
	I0731 04:17:11.106136    8680 start.go:93] Provisioning new machine with config: &{Name:embed-certs-775000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:11.106367    8680 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:11.113782    8680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:11.155449    8680 start.go:159] libmachine.API.Create for "embed-certs-775000" (driver="qemu2")
	I0731 04:17:11.155520    8680 client.go:168] LocalClient.Create starting
	I0731 04:17:11.155650    8680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:11.155697    8680 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:11.155713    8680 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:11.155826    8680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:11.155854    8680 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:11.155865    8680 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:11.156329    8680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:11.286470    8680 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:11.323177    8680 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:11.323185    8680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:11.323339    8680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:11.333482    8680 main.go:141] libmachine: STDOUT: 
	I0731 04:17:11.333495    8680 main.go:141] libmachine: STDERR: 
	I0731 04:17:11.333585    8680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2 +20000M
	I0731 04:17:11.341176    8680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:11.341192    8680 main.go:141] libmachine: STDERR: 
	I0731 04:17:11.341205    8680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:11.341210    8680 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:11.341247    8680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f7:e0:6b:e4:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:11.342965    8680 main.go:141] libmachine: STDOUT: 
	I0731 04:17:11.342978    8680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:11.342991    8680 client.go:171] LocalClient.Create took 187.46975ms
	I0731 04:17:13.345129    8680 start.go:128] duration metric: createHost completed in 2.238781375s
	I0731 04:17:13.345228    8680 start.go:83] releasing machines lock for "embed-certs-775000", held for 2.239180417s
	W0731 04:17:13.345653    8680 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:13.363633    8680 out.go:177] 
	W0731 04:17:13.367844    8680 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:13.367881    8680 out.go:239] * 
	* 
	W0731 04:17:13.370619    8680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:13.378620    8680 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (55.502375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.31s)

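Unlike the old-k8s-version profile above, embed-certs-775000 takes the fresh-create path: qemu-img convert and resize both succeed, and only the final socket_vmnet_client-wrapped launch fails, which is why minikube deletes the profile and retries once before exiting. One way to separate a vmnet problem from a QEMU problem is to boot the cached ISO with user-mode networking instead; a diagnostic sketch, not part of the test suite (ISO path taken from this report):

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 512 -nographic \
	  -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
	  -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
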
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-775000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-775000 create -f testdata/busybox.yaml: exit status 1 (30.210125ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-775000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (32.574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (33.402042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-775000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-775000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-775000 describe deploy/metrics-server -n kube-system: exit status 1 (27.02425ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-775000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-775000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (28.919417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

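The EnableAddonWhileActive failure above is again a missing-cluster cascade: the addons enable command appears to have exited 0 (no non-zero exit is recorded), but the follow-up kubectl describe had no context to talk to. Against a running cluster the assertion amounts to the following (commands reconstructed from this report):

	out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-775000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-775000 describe deploy/metrics-server -n kube-system \
	  | grep "fake.domain/registry.k8s.io/echoserver:1.4"
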
TestStartStop/group/no-preload/serial/SecondStart (7.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.973300167s)

                                                
                                                
-- stdout --
	* [no-preload-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-775000 in cluster no-preload-775000
	* Restarting existing qemu2 VM for "no-preload-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:04.200315    8708 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:04.200423    8708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:04.200427    8708 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:04.200436    8708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:04.200555    8708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:04.201558    8708 out.go:303] Setting JSON to false
	I0731 04:17:04.216720    8708 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9995,"bootTime":1690792229,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:04.216796    8708 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:04.222031    8708 out.go:177] * [no-preload-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:04.228966    8708 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:04.232937    8708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:04.229034    8708 notify.go:220] Checking for updates...
	I0731 04:17:04.235936    8708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:04.238995    8708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:04.241956    8708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:04.243240    8708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:04.246268    8708 config.go:182] Loaded profile config "no-preload-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:04.246549    8708 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:04.250924    8708 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:17:04.255915    8708 start.go:298] selected driver: qemu2
	I0731 04:17:04.255919    8708 start.go:898] validating driver "qemu2" against &{Name:no-preload-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:04.255967    8708 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:04.257822    8708 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:17:04.257851    8708 cni.go:84] Creating CNI manager for ""
	I0731 04:17:04.257859    8708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:04.257864    8708 start_flags.go:319] config:
	{Name:no-preload-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:04.261890    8708 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.268864    8708 out.go:177] * Starting control plane node no-preload-775000 in cluster no-preload-775000
	I0731 04:17:04.272978    8708 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:04.273050    8708 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/no-preload-775000/config.json ...
	I0731 04:17:04.273112    8708 cache.go:107] acquiring lock: {Name:mkc4a8611703cfa723270dd2d6a4175a6e745dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273116    8708 cache.go:107] acquiring lock: {Name:mkd647942dd11baf002954725ce7599d6d717ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273111    8708 cache.go:107] acquiring lock: {Name:mkd965e87299c119b23fe0eb0b9d8acc1778f75e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273142    8708 cache.go:107] acquiring lock: {Name:mkc8f63a4c9aa70ab12a89534c9c8913e58cfc82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273174    8708 cache.go:107] acquiring lock: {Name:mkf9099aaaa86448656c2c039f37b5b1a6d06004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273196    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 04:17:04.273205    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0731 04:17:04.273204    8708 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.125µs
	I0731 04:17:04.273226    8708 cache.go:107] acquiring lock: {Name:mkc4ae4512a782157446ebcd936f186cb4a842b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273235    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0731 04:17:04.273240    8708 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 67.75µs
	I0731 04:17:04.273272    8708 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0731 04:17:04.273273    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0731 04:17:04.273279    8708 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 53.792µs
	I0731 04:17:04.273283    8708 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0731 04:17:04.273288    8708 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 04:17:04.273210    8708 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 69.25µs
	I0731 04:17:04.273297    8708 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0731 04:17:04.273313    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0731 04:17:04.273317    8708 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 221.167µs
	I0731 04:17:04.273321    8708 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0731 04:17:04.273318    8708 cache.go:107] acquiring lock: {Name:mkd7662e4adde658701109e86222cd4e00a80ea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273353    8708 start.go:365] acquiring machines lock for no-preload-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:04.273350    8708 cache.go:107] acquiring lock: {Name:mkde5afe32e7f33b70773ca78336f62ae431d648 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:04.273345    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0731 04:17:04.273375    8708 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 270.375µs
	I0731 04:17:04.273384    8708 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0731 04:17:04.273436    8708 cache.go:115] /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0731 04:17:04.273440    8708 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 165.75µs
	I0731 04:17:04.273444    8708 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0731 04:17:04.273439    8708 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0731 04:17:04.278496    8708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0731 04:17:05.296256    8708 cache.go:162] opening:  /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0731 04:17:06.047492    8708 start.go:369] acquired machines lock for "no-preload-775000" in 1.774155792s
	I0731 04:17:06.047633    8708 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:06.047665    8708 fix.go:54] fixHost starting: 
	I0731 04:17:06.048318    8708 fix.go:102] recreateIfNeeded on no-preload-775000: state=Stopped err=<nil>
	W0731 04:17:06.048352    8708 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:06.057883    8708 out.go:177] * Restarting existing qemu2 VM for "no-preload-775000" ...
	I0731 04:17:06.070086    8708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4d:63:54:d6:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:17:06.080924    8708 main.go:141] libmachine: STDOUT: 
	I0731 04:17:06.080999    8708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:06.081106    8708 fix.go:56] fixHost completed within 33.455209ms
	I0731 04:17:06.081134    8708 start.go:83] releasing machines lock for "no-preload-775000", held for 33.610042ms
	W0731 04:17:06.081171    8708 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:06.081354    8708 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:06.081368    8708 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:11.081885    8708 start.go:365] acquiring machines lock for no-preload-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:11.082288    8708 start.go:369] acquired machines lock for "no-preload-775000" in 319.542µs
	I0731 04:17:11.082450    8708 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:11.082471    8708 fix.go:54] fixHost starting: 
	I0731 04:17:11.083178    8708 fix.go:102] recreateIfNeeded on no-preload-775000: state=Stopped err=<nil>
	W0731 04:17:11.083207    8708 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:11.091833    8708 out.go:177] * Restarting existing qemu2 VM for "no-preload-775000" ...
	I0731 04:17:11.095987    8708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4d:63:54:d6:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/no-preload-775000/disk.qcow2
	I0731 04:17:11.105850    8708 main.go:141] libmachine: STDOUT: 
	I0731 04:17:11.105898    8708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:11.105980    8708 fix.go:56] fixHost completed within 23.511167ms
	I0731 04:17:11.105997    8708 start.go:83] releasing machines lock for "no-preload-775000", held for 23.689833ms
	W0731 04:17:11.106161    8708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:11.121840    8708 out.go:177] 
	W0731 04:17:11.124849    8708 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:11.124867    8708 out.go:239] * 
	W0731 04:17:11.126596    8708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:11.136752    8708 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-775000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (43.407083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.02s)
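
Every qemu2 start failure in this group bottoms out on the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. no socket_vmnet daemon is accepting connections on the host. A minimal check on the build agent, reusing the client binary and socket path exactly as they appear in the log above (the trailing "true" is an illustrative no-op command standing in for the real qemu-system-aarch64 invocation):

    # Does the socket exist, and is anything listening on it?
    ls -l /var/run/socket_vmnet
    # Run the driver's own client against the socket with a no-op command:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If this also reports "Connection refused", restarting the daemon should clear the whole group of failures: "sudo brew services restart socket_vmnet" for a Homebrew service install, or relaunching the daemon by hand for a manual /opt/socket_vmnet install.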

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-775000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (32.082541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-775000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.635292ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-775000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (31.9735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
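
Both post-stop waits above fail before ever reaching the cluster: the kubeconfig has no "no-preload-775000" context because the failed second start never re-created it. A pre-check along these lines, using only stock kubectl, would surface that directly (the profile name is taken from the test):

    # Confirm the expected context exists before polling for pods:
    kubectl config get-contexts -o name | grep -x no-preload-775000 || echo "context no-preload-775000 missing"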

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-775000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-775000 "sudo crictl images -o json": exit status 89 (46.063167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-775000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-775000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-775000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (27.737542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
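
The image check works by running "sudo crictl images -o json" inside the VM over minikube ssh and decoding the JSON; with the node stopped, stdout is the "control plane node must be running" banner instead, so the decoder trips on the leading '*' and every expected v1.27.3 image is reported missing. On a healthy node the same probe can be inspected by hand; a sketch assuming jq is available on the host (the test itself does the decoding in Go):

    # List the repo tags crictl reports inside the VM:
    out/minikube-darwin-arm64 ssh -p no-preload-775000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'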

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-775000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-775000 --alsologtostderr -v=1: exit status 89 (39.788291ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-775000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:11.387235    8745 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:11.387351    8745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:11.387354    8745 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:11.387357    8745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:11.387469    8745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:11.387677    8745 out.go:303] Setting JSON to false
	I0731 04:17:11.387685    8745 mustload.go:65] Loading cluster: no-preload-775000
	I0731 04:17:11.387850    8745 config.go:182] Loaded profile config "no-preload-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:11.392805    8745 out.go:177] * The control plane node must be running for this command
	I0731 04:17:11.396843    8745 out.go:177]   To start a cluster, run: "minikube start -p no-preload-775000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-775000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (27.506167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (27.503833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
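
Two distinct exit codes appear in this block: "pause" fails with 89 ("The control plane node must be running"), while the harness's status probe exits 7 for a stopped host and prints the state ("Stopped") on stdout. A guard in the same spirit as the harness, built only from commands already used in this report, might look like:

    # Attempt the pause only when the host reports Running; otherwise echo the state.
    host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p no-preload-775000)
    if [ "$host" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p no-preload-775000 --alsologtostderr -v=1
    else
      echo "skipping pause: host is $host"
    fi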

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.070973333s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-127000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-127000 in cluster default-k8s-diff-port-127000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:12.090003    8780 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:12.090099    8780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:12.090105    8780 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:12.090107    8780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:12.090218    8780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:12.091288    8780 out.go:303] Setting JSON to false
	I0731 04:17:12.106615    8780 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10003,"bootTime":1690792229,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:12.106681    8780 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:12.110218    8780 out.go:177] * [default-k8s-diff-port-127000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:12.119166    8780 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:12.122950    8780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:12.119223    8780 notify.go:220] Checking for updates...
	I0731 04:17:12.131078    8780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:12.135062    8780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:12.138090    8780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:12.142125    8780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:12.145417    8780 config.go:182] Loaded profile config "embed-certs-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:12.145479    8780 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:12.145522    8780 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:12.149052    8780 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:17:12.156021    8780 start.go:298] selected driver: qemu2
	I0731 04:17:12.156026    8780 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:17:12.156032    8780 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:12.158041    8780 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 04:17:12.161072    8780 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:17:12.164213    8780 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:17:12.164240    8780 cni.go:84] Creating CNI manager for ""
	I0731 04:17:12.164247    8780 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:12.164251    8780 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:17:12.164261    8780 start_flags.go:319] config:
	{Name:default-k8s-diff-port-127000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-127000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:12.168672    8780 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:12.177102    8780 out.go:177] * Starting control plane node default-k8s-diff-port-127000 in cluster default-k8s-diff-port-127000
	I0731 04:17:12.181096    8780 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:12.181122    8780 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:12.181138    8780 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:12.181210    8780 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:12.181216    8780 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:12.181284    8780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/default-k8s-diff-port-127000/config.json ...
	I0731 04:17:12.181298    8780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/default-k8s-diff-port-127000/config.json: {Name:mkc2d4af44d9b7bd1b96f2affb43a6ea6c5c146d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:17:12.181505    8780 start.go:365] acquiring machines lock for default-k8s-diff-port-127000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:13.345414    8780 start.go:369] acquired machines lock for "default-k8s-diff-port-127000" in 1.163890125s
	I0731 04:17:13.345652    8780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-127000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:13.345918    8780 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:13.359465    8780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:13.409051    8780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-127000" (driver="qemu2")
	I0731 04:17:13.409099    8780 client.go:168] LocalClient.Create starting
	I0731 04:17:13.409276    8780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:13.409323    8780 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:13.409352    8780 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:13.409439    8780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:13.409469    8780 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:13.409488    8780 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:13.410073    8780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:13.556160    8780 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:13.654820    8780 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:13.654829    8780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:13.654994    8780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:13.663965    8780 main.go:141] libmachine: STDOUT: 
	I0731 04:17:13.663981    8780 main.go:141] libmachine: STDERR: 
	I0731 04:17:13.664035    8780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2 +20000M
	I0731 04:17:13.671692    8780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:13.671708    8780 main.go:141] libmachine: STDERR: 
	I0731 04:17:13.671727    8780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:13.671739    8780 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:13.671776    8780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:d7:96:97:a3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:13.673587    8780 main.go:141] libmachine: STDOUT: 
	I0731 04:17:13.673602    8780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:13.673621    8780 client.go:171] LocalClient.Create took 264.507125ms
	I0731 04:17:15.675759    8780 start.go:128] duration metric: createHost completed in 2.329854709s
	I0731 04:17:15.675851    8780 start.go:83] releasing machines lock for "default-k8s-diff-port-127000", held for 2.330434125s
	W0731 04:17:15.675948    8780 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:15.694563    8780 out.go:177] * Deleting "default-k8s-diff-port-127000" in qemu2 ...
	W0731 04:17:15.719685    8780 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:15.719722    8780 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:20.721861    8780 start.go:365] acquiring machines lock for default-k8s-diff-port-127000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:20.736703    8780 start.go:369] acquired machines lock for "default-k8s-diff-port-127000" in 14.770292ms
	I0731 04:17:20.736775    8780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-127000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:20.737026    8780 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:20.745931    8780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:20.789093    8780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-127000" (driver="qemu2")
	I0731 04:17:20.789127    8780 client.go:168] LocalClient.Create starting
	I0731 04:17:20.789260    8780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:20.789308    8780 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:20.789334    8780 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:20.789430    8780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:20.789458    8780 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:20.789483    8780 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:20.789982    8780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:20.922898    8780 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:21.073596    8780 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:21.073606    8780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:21.073800    8780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:21.082968    8780 main.go:141] libmachine: STDOUT: 
	I0731 04:17:21.082987    8780 main.go:141] libmachine: STDERR: 
	I0731 04:17:21.083046    8780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2 +20000M
	I0731 04:17:21.091184    8780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:21.091212    8780 main.go:141] libmachine: STDERR: 
	I0731 04:17:21.091238    8780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:21.091243    8780 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:21.091281    8780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:c0:99:d1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:21.093079    8780 main.go:141] libmachine: STDOUT: 
	I0731 04:17:21.093094    8780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:21.093108    8780 client.go:171] LocalClient.Create took 303.983792ms
	I0731 04:17:23.095285    8780 start.go:128] duration metric: createHost completed in 2.358250583s
	I0731 04:17:23.095369    8780 start.go:83] releasing machines lock for "default-k8s-diff-port-127000", held for 2.358691833s
	W0731 04:17:23.095760    8780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:23.108317    8780 out.go:177] 
	W0731 04:17:23.111491    8780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:23.111559    8780 out.go:239] * 
	W0731 04:17:23.114376    8780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:23.123415    8780 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (49.848167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.12s)
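
Note how far the create path gets before dying: the boot2docker.iso copy, the SSH key, and both qemu-img steps (the raw-to-qcow2 convert and the +20000M resize) all succeed, and the run only fails when socket_vmnet_client tries to hand qemu its network fd. The disk tooling can be exercised in isolation to confirm the fault is the host network daemon rather than qemu (paths shortened here; the real ones are the machine-directory paths in the log):

    # Re-run the two disk steps that already succeed in the log, without any networking:
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M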

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-775000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-775000 create -f testdata/busybox.yaml: exit status 1 (30.040375ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-775000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (30.9125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (31.370917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
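
"error: no openapi getter" is how kubectl fails here: "create -f" wants to fetch the OpenAPI schema for client-side validation, and there is no reachable apiserver behind the embed-certs-775000 context. A plain read-only call against the same context surfaces the underlying connectivity error more directly:

    # Shows the real connection failure instead of the openapi wrapper error:
    kubectl --context embed-certs-775000 get nodes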

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-775000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-775000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-775000 describe deploy/metrics-server -n kube-system: exit status 1 (26.029583ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-775000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-775000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (26.989958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
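
Note: this step asserts that the metrics-server Deployment picked up the overridden image and registry. On a healthy cluster the same check reduces to a stock kubectl jsonpath query (the expected value, per the test, is "fake.domain/registry.k8s.io/echoserver:1.4"):

    kubectl --context embed-certs-775000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'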

TestStartStop/group/embed-certs/serial/SecondStart (7.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.968528209s)

-- stdout --
	* [embed-certs-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-775000 in cluster embed-certs-775000
	* Restarting existing qemu2 VM for "embed-certs-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:17:13.833440    8818 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:13.833543    8818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:13.833545    8818 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:13.833548    8818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:13.833681    8818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:13.834764    8818 out.go:303] Setting JSON to false
	I0731 04:17:13.849625    8818 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10004,"bootTime":1690792229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:13.849699    8818 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:13.854536    8818 out.go:177] * [embed-certs-775000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:13.862560    8818 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:13.858578    8818 notify.go:220] Checking for updates...
	I0731 04:17:13.869527    8818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:13.872580    8818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:13.876527    8818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:13.880527    8818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:13.884529    8818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:13.888842    8818 config.go:182] Loaded profile config "embed-certs-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:13.889094    8818 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:13.893498    8818 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:17:13.900545    8818 start.go:298] selected driver: qemu2
	I0731 04:17:13.900550    8818 start.go:898] validating driver "qemu2" against &{Name:embed-certs-775000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:13.900628    8818 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:13.902534    8818 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:17:13.902560    8818 cni.go:84] Creating CNI manager for ""
	I0731 04:17:13.902567    8818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:13.902572    8818 start_flags.go:319] config:
	{Name:embed-certs-775000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:13.906633    8818 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:13.913567    8818 out.go:177] * Starting control plane node embed-certs-775000 in cluster embed-certs-775000
	I0731 04:17:13.917471    8818 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:13.917487    8818 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:13.917495    8818 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:13.917544    8818 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:13.917549    8818 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:13.917604    8818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/embed-certs-775000/config.json ...
	I0731 04:17:13.917984    8818 start.go:365] acquiring machines lock for embed-certs-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:15.676029    8818 start.go:369] acquired machines lock for "embed-certs-775000" in 1.75801725s
	I0731 04:17:15.676161    8818 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:15.676194    8818 fix.go:54] fixHost starting: 
	I0731 04:17:15.676961    8818 fix.go:102] recreateIfNeeded on embed-certs-775000: state=Stopped err=<nil>
	W0731 04:17:15.677002    8818 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:15.687619    8818 out.go:177] * Restarting existing qemu2 VM for "embed-certs-775000" ...
	I0731 04:17:15.697849    8818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f7:e0:6b:e4:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:15.708294    8818 main.go:141] libmachine: STDOUT: 
	I0731 04:17:15.708449    8818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:15.708569    8818 fix.go:56] fixHost completed within 32.381333ms
	I0731 04:17:15.708594    8818 start.go:83] releasing machines lock for "embed-certs-775000", held for 32.535417ms
	W0731 04:17:15.708636    8818 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:15.708805    8818 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:15.708824    8818 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:20.710955    8818 start.go:365] acquiring machines lock for embed-certs-775000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:20.711473    8818 start.go:369] acquired machines lock for "embed-certs-775000" in 389.833µs
	I0731 04:17:20.711699    8818 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:20.711719    8818 fix.go:54] fixHost starting: 
	I0731 04:17:20.712512    8818 fix.go:102] recreateIfNeeded on embed-certs-775000: state=Stopped err=<nil>
	W0731 04:17:20.712543    8818 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:20.723956    8818 out.go:177] * Restarting existing qemu2 VM for "embed-certs-775000" ...
	I0731 04:17:20.727069    8818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f7:e0:6b:e4:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/embed-certs-775000/disk.qcow2
	I0731 04:17:20.736438    8818 main.go:141] libmachine: STDOUT: 
	I0731 04:17:20.736501    8818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:20.736605    8818 fix.go:56] fixHost completed within 24.887042ms
	I0731 04:17:20.736631    8818 start.go:83] releasing machines lock for "embed-certs-775000", held for 25.123625ms
	W0731 04:17:20.736813    8818 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:20.752956    8818 out.go:177] 
	W0731 04:17:20.755917    8818 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:20.755967    8818 out.go:239] * 
	* 
	W0731 04:17:20.757939    8818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:20.767914    8818 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-775000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (46.07375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.02s)
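
Note: as in every other qemu2 failure in this run, the root cause is that socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet. A minimal host-side sanity check, assuming the /opt/socket_vmnet install shown in the executed command line (the gateway address below is socket_vmnet's documented default, not taken from this log):

    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # if nothing is listening, start the daemon (flags depend on the install):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet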

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-775000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (33.994709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-775000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.489208ms)

** stderr ** 
	error: context "embed-certs-775000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (32.726542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-775000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-775000 "sudo crictl images -o json": exit status 89 (39.626084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-775000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-775000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-775000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (28.678666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
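
Note: the JSON decode error is secondary; the ssh command returned minikube's "control plane must be running" banner instead of crictl output. On a running node the wanted tag list can be pulled from the same command, e.g. piped through jq (assuming jq on the host; crictl's JSON keeps tags under .images[].repoTags):

    out/minikube-darwin-arm64 ssh -p embed-certs-775000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'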

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-775000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-775000 --alsologtostderr -v=1: exit status 89 (40.014875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-775000"

-- /stdout --
** stderr ** 
	I0731 04:17:21.016937    8838 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:21.017063    8838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:21.017067    8838 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:21.017070    8838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:21.017187    8838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:21.017405    8838 out.go:303] Setting JSON to false
	I0731 04:17:21.017416    8838 mustload.go:65] Loading cluster: embed-certs-775000
	I0731 04:17:21.017601    8838 config.go:182] Loaded profile config "embed-certs-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:21.021948    8838 out.go:177] * The control plane node must be running for this command
	I0731 04:17:21.024960    8838 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-775000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-775000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (28.812333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (28.50825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
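
Note: exit status 89 here accompanies the "control plane node must be running" message; pause needs a live host. The harness probes only {{.Host}}, but the other fields of minikube's status template would distinguish a stopped VM from a stopped kubelet or apiserver:

    out/minikube-darwin-arm64 status -p embed-certs-775000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'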

TestStartStop/group/newest-cni/serial/FirstStart (11.36s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.288265s)

-- stdout --
	* [newest-cni-000000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-000000 in cluster newest-cni-000000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-000000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 04:17:21.472738    8864 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:21.472855    8864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:21.472858    8864 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:21.472861    8864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:21.472977    8864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:21.473980    8864 out.go:303] Setting JSON to false
	I0731 04:17:21.489081    8864 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10012,"bootTime":1690792229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:21.489167    8864 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:21.494271    8864 out.go:177] * [newest-cni-000000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:21.497232    8864 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:21.501179    8864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:21.497314    8864 notify.go:220] Checking for updates...
	I0731 04:17:21.505285    8864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:21.508211    8864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:21.511204    8864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:21.514195    8864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:21.517472    8864 config.go:182] Loaded profile config "default-k8s-diff-port-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:21.517535    8864 config.go:182] Loaded profile config "multinode-151000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:21.517576    8864 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:21.522135    8864 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 04:17:21.529162    8864 start.go:298] selected driver: qemu2
	I0731 04:17:21.529166    8864 start.go:898] validating driver "qemu2" against <nil>
	I0731 04:17:21.529174    8864 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:21.530967    8864 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0731 04:17:21.530988    8864 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 04:17:21.539167    8864 out.go:177] * Automatically selected the socket_vmnet network
	I0731 04:17:21.542258    8864 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 04:17:21.542277    8864 cni.go:84] Creating CNI manager for ""
	I0731 04:17:21.542292    8864 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:21.542296    8864 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 04:17:21.542303    8864 start_flags.go:319] config:
	{Name:newest-cni-000000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:21.546407    8864 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:21.553161    8864 out.go:177] * Starting control plane node newest-cni-000000 in cluster newest-cni-000000
	I0731 04:17:21.557164    8864 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:21.557194    8864 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:21.557211    8864 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:21.557270    8864 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:21.557275    8864 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:21.557339    8864 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/newest-cni-000000/config.json ...
	I0731 04:17:21.557351    8864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/newest-cni-000000/config.json: {Name:mk12ad476a3dbfa6cba119a0fd55359a89554aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 04:17:21.557563    8864 start.go:365] acquiring machines lock for newest-cni-000000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:23.095514    8864 start.go:369] acquired machines lock for "newest-cni-000000" in 1.537957584s
	I0731 04:17:23.095769    8864 start.go:93] Provisioning new machine with config: &{Name:newest-cni-000000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:23.096009    8864 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:23.104274    8864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:23.152635    8864 start.go:159] libmachine.API.Create for "newest-cni-000000" (driver="qemu2")
	I0731 04:17:23.152701    8864 client.go:168] LocalClient.Create starting
	I0731 04:17:23.152848    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:23.152890    8864 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:23.152916    8864 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:23.152990    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:23.153019    8864 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:23.153040    8864 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:23.153628    8864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:23.285198    8864 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:23.325054    8864 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:23.325060    8864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:23.325186    8864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:23.333886    8864 main.go:141] libmachine: STDOUT: 
	I0731 04:17:23.333901    8864 main.go:141] libmachine: STDERR: 
	I0731 04:17:23.333965    8864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2 +20000M
	I0731 04:17:23.341888    8864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:23.341906    8864 main.go:141] libmachine: STDERR: 
	I0731 04:17:23.341927    8864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:23.341932    8864 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:23.341967    8864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:03:4f:56:2a:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:23.343694    8864 main.go:141] libmachine: STDOUT: 
	I0731 04:17:23.343708    8864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:23.343727    8864 client.go:171] LocalClient.Create took 191.0245ms
	I0731 04:17:25.345912    8864 start.go:128] duration metric: createHost completed in 2.24991975s
	I0731 04:17:25.346044    8864 start.go:83] releasing machines lock for "newest-cni-000000", held for 2.250464625s
	W0731 04:17:25.346139    8864 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:25.362801    8864 out.go:177] * Deleting "newest-cni-000000" in qemu2 ...
	W0731 04:17:25.386858    8864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:25.386883    8864 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:30.388998    8864 start.go:365] acquiring machines lock for newest-cni-000000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:30.405259    8864 start.go:369] acquired machines lock for "newest-cni-000000" in 16.17925ms
	I0731 04:17:30.405304    8864 start.go:93] Provisioning new machine with config: &{Name:newest-cni-000000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 04:17:30.405482    8864 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 04:17:30.409346    8864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 04:17:30.453604    8864 start.go:159] libmachine.API.Create for "newest-cni-000000" (driver="qemu2")
	I0731 04:17:30.453654    8864 client.go:168] LocalClient.Create starting
	I0731 04:17:30.453797    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/ca.pem
	I0731 04:17:30.453845    8864 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:30.453865    8864 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:30.453941    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16968-4815/.minikube/certs/cert.pem
	I0731 04:17:30.453968    8864 main.go:141] libmachine: Decoding PEM data...
	I0731 04:17:30.453981    8864 main.go:141] libmachine: Parsing certificate...
	I0731 04:17:30.454435    8864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0731 04:17:30.585600    8864 main.go:141] libmachine: Creating SSH key...
	I0731 04:17:30.674788    8864 main.go:141] libmachine: Creating Disk image...
	I0731 04:17:30.674798    8864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 04:17:30.674970    8864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2.raw /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:30.683913    8864 main.go:141] libmachine: STDOUT: 
	I0731 04:17:30.683941    8864 main.go:141] libmachine: STDERR: 
	I0731 04:17:30.684008    8864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2 +20000M
	I0731 04:17:30.697324    8864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 04:17:30.697340    8864 main.go:141] libmachine: STDERR: 
	I0731 04:17:30.697368    8864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:30.697376    8864 main.go:141] libmachine: Starting QEMU VM...
	I0731 04:17:30.697421    8864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:af:a8:ef:32:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:30.698950    8864 main.go:141] libmachine: STDOUT: 
	I0731 04:17:30.698965    8864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:30.698979    8864 client.go:171] LocalClient.Create took 245.323917ms
	I0731 04:17:32.701132    8864 start.go:128] duration metric: createHost completed in 2.295661916s
	I0731 04:17:32.701224    8864 start.go:83] releasing machines lock for "newest-cni-000000", held for 2.295992959s
	W0731 04:17:32.701675    8864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-000000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-000000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:32.707384    8864 out.go:177] 
	W0731 04:17:32.711396    8864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:32.711428    8864 out.go:239] * 
	* 
	W0731 04:17:32.714198    8864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:32.723306    8864 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (69.488041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-000000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.36s)
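Every qemu2 failure in this group has the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 is ever launched. A triage sketch, assuming the Homebrew layout these logs use (the echo is an arbitrary command for the client to wrap, purely as a connectivity probe):

    ls -l /var/run/socket_vmnet                                                # does the socket exist?
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok    # same client binary the driver execs
    sudo brew services restart socket_vmnet                                    # if installed as a Homebrew service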

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-127000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-127000 create -f testdata/busybox.yaml: exit status 1 (30.206083ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-127000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (33.009917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (32.522708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
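kubectl's "error: no openapi getter" is a symptom rather than a manifest problem: create -f asks the API server for its OpenAPI schema to validate against, and with the VM stopped there is no server to ask. Two quick checks before suspecting busybox.yaml:

    kubectl --context default-k8s-diff-port-127000 cluster-info
    # skip server-side schema validation to surface the raw connection error instead:
    kubectl --context default-k8s-diff-port-127000 create -f testdata/busybox.yaml --validate=false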

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-127000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-127000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-127000 describe deploy/metrics-server -n kube-system: exit status 1 (25.337292ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-127000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-127000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (27.591875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
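Note the asymmetry above: the minikube addons enable step succeeded (it only edits the profile's addon configuration), while the kubectl describe failed because no kubeconfig context was ever written for a cluster that never booted. A sketch for checking which side is missing:

    kubectl config get-contexts                                               # the context should be absent
    out/minikube-darwin-arm64 addons list -p default-k8s-diff-port-127000     # the profile should still show metrics-server enabled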

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.898093459s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-127000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-127000 in cluster default-k8s-diff-port-127000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:23.569524    8892 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:23.569614    8892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:23.569618    8892 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:23.569620    8892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:23.569721    8892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:23.570680    8892 out.go:303] Setting JSON to false
	I0731 04:17:23.585720    8892 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10014,"bootTime":1690792229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:23.585799    8892 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:23.590356    8892 out.go:177] * [default-k8s-diff-port-127000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:23.597351    8892 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:23.601317    8892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:23.597417    8892 notify.go:220] Checking for updates...
	I0731 04:17:23.607330    8892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:23.610333    8892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:23.611627    8892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:23.614264    8892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:23.617616    8892 config.go:182] Loaded profile config "default-k8s-diff-port-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:23.617867    8892 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:23.622126    8892 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:17:23.629258    8892 start.go:298] selected driver: qemu2
	I0731 04:17:23.629262    8892 start.go:898] validating driver "qemu2" against &{Name:default-k8s-diff-port-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:d
efault-k8s-diff-port-127000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:23.629311    8892 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:23.631211    8892 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 04:17:23.631234    8892 cni.go:84] Creating CNI manager for ""
	I0731 04:17:23.631241    8892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:23.631253    8892 start_flags.go:319] config:
	{Name:default-k8s-diff-port-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-127000 Namespace:default APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:23.635250    8892 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:23.643287    8892 out.go:177] * Starting control plane node default-k8s-diff-port-127000 in cluster default-k8s-diff-port-127000
	I0731 04:17:23.647333    8892 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:23.647353    8892 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:23.647365    8892 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:23.647426    8892 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:23.647432    8892 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:23.647501    8892 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/default-k8s-diff-port-127000/config.json ...
	I0731 04:17:23.647887    8892 start.go:365] acquiring machines lock for default-k8s-diff-port-127000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:25.346213    8892 start.go:369] acquired machines lock for "default-k8s-diff-port-127000" in 1.698316917s
	I0731 04:17:25.346390    8892 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:25.346425    8892 fix.go:54] fixHost starting: 
	I0731 04:17:25.347104    8892 fix.go:102] recreateIfNeeded on default-k8s-diff-port-127000: state=Stopped err=<nil>
	W0731 04:17:25.347143    8892 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:25.354435    8892 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-127000" ...
	I0731 04:17:25.367001    8892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:c0:99:d1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:25.377489    8892 main.go:141] libmachine: STDOUT: 
	I0731 04:17:25.377547    8892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:25.377671    8892 fix.go:56] fixHost completed within 31.256708ms
	I0731 04:17:25.377693    8892 start.go:83] releasing machines lock for "default-k8s-diff-port-127000", held for 31.423666ms
	W0731 04:17:25.377731    8892 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:25.377924    8892 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:25.378011    8892 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:30.380156    8892 start.go:365] acquiring machines lock for default-k8s-diff-port-127000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:30.380726    8892 start.go:369] acquired machines lock for "default-k8s-diff-port-127000" in 473.791µs
	I0731 04:17:30.380883    8892 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:30.380902    8892 fix.go:54] fixHost starting: 
	I0731 04:17:30.381698    8892 fix.go:102] recreateIfNeeded on default-k8s-diff-port-127000: state=Stopped err=<nil>
	W0731 04:17:30.381726    8892 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:30.391388    8892 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-127000" ...
	I0731 04:17:30.395565    8892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:c0:99:d1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/disk.qcow2
	I0731 04:17:30.405040    8892 main.go:141] libmachine: STDOUT: 
	I0731 04:17:30.405086    8892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:30.405167    8892 fix.go:56] fixHost completed within 24.266333ms
	I0731 04:17:30.405186    8892 start.go:83] releasing machines lock for "default-k8s-diff-port-127000", held for 24.437458ms
	W0731 04:17:30.405341    8892 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:30.416461    8892 out.go:177] 
	W0731 04:17:30.420494    8892 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:30.420519    8892 out.go:239] * 
	* 
	W0731 04:17:30.425062    8892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:30.431419    8892 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-127000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (47.570959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.95s)
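The timings in the stderr are telling: fixHost completes within ~31ms and then ~24ms, meaning qemu never got far enough to daemonize, so the single 5-second retry cannot help while the socket stays dead. Since the invocation passes -daemonize and -pidfile, a failure at the socket should leave neither a process nor a pidfile behind (paths copied from the log above):

    pgrep -fl qemu-system-aarch64 || echo "no qemu running"
    cat /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/default-k8s-diff-port-127000/qemu.pid 2>/dev/null || echo "no pidfile"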

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-127000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (32.584917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-127000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-127000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-127000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.384625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-127000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-127000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (32.484584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-127000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-127000 "sudo crictl images -o json": exit status 89 (43.016458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-127000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-127000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-127000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (28.849542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
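The recurring "status error: exit status 7 (may be ok)" in the post-mortems is the expected code for a stopped host: per the status command's help text, the exit status encodes host, cluster, and Kubernetes health bitwise, so 7 = 1 (host down) + 2 (cluster down) + 4 (kubernetes down). To read it directly:

    out/minikube-darwin-arm64 status -p default-k8s-diff-port-127000
    echo $?    # 7 => all three components down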

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-127000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-127000 --alsologtostderr -v=1: exit status 89 (45.758791ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-127000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:30.685419    8913 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:30.685554    8913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:30.685557    8913 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:30.685560    8913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:30.685673    8913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:30.685870    8913 out.go:303] Setting JSON to false
	I0731 04:17:30.685880    8913 mustload.go:65] Loading cluster: default-k8s-diff-port-127000
	I0731 04:17:30.686068    8913 config.go:182] Loaded profile config "default-k8s-diff-port-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:30.690351    8913 out.go:177] * The control plane node must be running for this command
	I0731 04:17:30.700370    8913 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-127000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-127000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (27.743958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (27.935375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
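pause exits 89 without attempting anything: mustload.go loads the profile, sees the control plane is not running, and refuses. When scripting pause/unpause outside the test harness, guarding on status avoids this misleading failure mode (a sketch, not what the harness itself does):

    if out/minikube-darwin-arm64 status -p default-k8s-diff-port-127000 >/dev/null; then
      out/minikube-darwin-arm64 pause -p default-k8s-diff-port-127000
    else
      echo "control plane not running; skipping pause"
    fi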

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.168925791s)

                                                
                                                
-- stdout --
	* [newest-cni-000000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-000000 in cluster newest-cni-000000
	* Restarting existing qemu2 VM for "newest-cni-000000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-000000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 04:17:33.045442    8949 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:33.045580    8949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:33.045583    8949 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:33.045585    8949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:33.045690    8949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:33.046672    8949 out.go:303] Setting JSON to false
	I0731 04:17:33.061996    8949 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10024,"bootTime":1690792229,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:17:33.062057    8949 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:17:33.066910    8949 out.go:177] * [newest-cni-000000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:17:33.074028    8949 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:17:33.074127    8949 notify.go:220] Checking for updates...
	I0731 04:17:33.076981    8949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:17:33.079974    8949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:17:33.083010    8949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:17:33.084333    8949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:17:33.087007    8949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:17:33.090277    8949 config.go:182] Loaded profile config "newest-cni-000000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:33.090521    8949 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:17:33.094853    8949 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:17:33.101979    8949 start.go:298] selected driver: qemu2
	I0731 04:17:33.101984    8949 start.go:898] validating driver "qemu2" against &{Name:newest-cni-000000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-0
00000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmne
t Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:33.102052    8949 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:17:33.103955    8949 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 04:17:33.103975    8949 cni.go:84] Creating CNI manager for ""
	I0731 04:17:33.103982    8949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 04:17:33.103991    8949 start_flags.go:319] config:
	{Name:newest-cni-000000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:17:33.108205    8949 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 04:17:33.114985    8949 out.go:177] * Starting control plane node newest-cni-000000 in cluster newest-cni-000000
	I0731 04:17:33.119004    8949 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 04:17:33.119026    8949 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 04:17:33.119035    8949 cache.go:57] Caching tarball of preloaded images
	I0731 04:17:33.119099    8949 preload.go:174] Found /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 04:17:33.119106    8949 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0731 04:17:33.119185    8949 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/newest-cni-000000/config.json ...
	I0731 04:17:33.119539    8949 start.go:365] acquiring machines lock for newest-cni-000000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:33.119572    8949 start.go:369] acquired machines lock for "newest-cni-000000" in 28.25µs
	I0731 04:17:33.119581    8949 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:33.119585    8949 fix.go:54] fixHost starting: 
	I0731 04:17:33.119696    8949 fix.go:102] recreateIfNeeded on newest-cni-000000: state=Stopped err=<nil>
	W0731 04:17:33.119704    8949 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:33.123048    8949 out.go:177] * Restarting existing qemu2 VM for "newest-cni-000000" ...
	I0731 04:17:33.131042    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:af:a8:ef:32:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:33.133014    8949 main.go:141] libmachine: STDOUT: 
	I0731 04:17:33.133032    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:33.133062    8949 fix.go:56] fixHost completed within 13.476792ms
	I0731 04:17:33.133067    8949 start.go:83] releasing machines lock for "newest-cni-000000", held for 13.491083ms
	W0731 04:17:33.133077    8949 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:33.133117    8949 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:33.133121    8949 start.go:687] Will try again in 5 seconds ...
	I0731 04:17:38.135220    8949 start.go:365] acquiring machines lock for newest-cni-000000: {Name:mkb744fb0fcb156faae763b79d3e98d9505736d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 04:17:38.135765    8949 start.go:369] acquired machines lock for "newest-cni-000000" in 450.625µs
	I0731 04:17:38.135897    8949 start.go:96] Skipping create...Using existing machine configuration
	I0731 04:17:38.135918    8949 fix.go:54] fixHost starting: 
	I0731 04:17:38.136626    8949 fix.go:102] recreateIfNeeded on newest-cni-000000: state=Stopped err=<nil>
	W0731 04:17:38.136653    8949 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 04:17:38.141020    8949 out.go:177] * Restarting existing qemu2 VM for "newest-cni-000000" ...
	I0731 04:17:38.145228    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:af:a8:ef:32:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/newest-cni-000000/disk.qcow2
	I0731 04:17:38.154573    8949 main.go:141] libmachine: STDOUT: 
	I0731 04:17:38.154654    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 04:17:38.154753    8949 fix.go:56] fixHost completed within 18.837625ms
	I0731 04:17:38.154776    8949 start.go:83] releasing machines lock for "newest-cni-000000", held for 18.986041ms
	W0731 04:17:38.155011    8949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-000000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-000000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 04:17:38.161101    8949 out.go:177] 
	W0731 04:17:38.165133    8949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 04:17:38.165157    8949 out.go:239] * 
	* 
	W0731 04:17:38.167848    8949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 04:17:38.174816    8949 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-000000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (72.948125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-000000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
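Exit status 80 here is minikube's reserved guest-error code (the GUEST_PROVISION reason above maps onto it), distinct from the status command's bitmask codes. When reproducing interactively, capturing verbose driver output keeps the socket_vmnet error adjacent to the qemu command that triggered it:

    out/minikube-darwin-arm64 start -p newest-cni-000000 --alsologtostderr -v=7 2>start.log

(the `minikube logs --file=logs.txt` step suggested in the error box needs a running host, which is exactly why the post-mortems skip log retrieval)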

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-000000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-000000 "sudo crictl images -o json": exit status 89 (44.072125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-000000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-000000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-000000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (29.077708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-000000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
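
The image check drives sudo crictl images -o json over ssh and diffs the repo tags against the expected v1.27.3 set. With a running control plane the same list can be pulled out by hand; a sketch assuming jq is available on the host:

	out/minikube-darwin-arm64 ssh -p newest-cni-000000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]' | sort

In this run the ssh step itself exits 89 because the node is stopped, so the decoder receives the plain-text control-plane warning rather than JSON, which is exactly the invalid character '*' error above.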

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-000000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-000000 --alsologtostderr -v=1: exit status 89 (39.823875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-000000"

-- /stdout --
** stderr ** 
	I0731 04:17:38.362813    8963 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:17:38.362945    8963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:38.362948    8963 out.go:309] Setting ErrFile to fd 2...
	I0731 04:17:38.362950    8963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:17:38.363056    8963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:17:38.363267    8963 out.go:303] Setting JSON to false
	I0731 04:17:38.363275    8963 mustload.go:65] Loading cluster: newest-cni-000000
	I0731 04:17:38.363451    8963 config.go:182] Loaded profile config "newest-cni-000000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:17:38.367265    8963 out.go:177] * The control plane node must be running for this command
	I0731 04:17:38.371307    8963 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-000000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-000000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (29.208542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-000000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (29.460625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-000000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
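
pause, like ssh before it, refuses to act unless the host is Running, so once the post-stop start fails every later step in this serial group collapses into the same exit 89. When replaying by hand, a status guard makes the short-circuit explicit (a sketch built from the commands above):

	if [ "$(out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p newest-cni-000000 --alsologtostderr -v=1
	fi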

Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.3/json-events 11.41
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.28
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.34
30 TestHyperKitDriverInstallOrUpdate 9.65
33 TestErrorSpam/setup 28.24
34 TestErrorSpam/start 0.35
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.65
37 TestErrorSpam/unpause 0.65
38 TestErrorSpam/stop 12.24
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 45.5
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 35.54
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 5.79
50 TestFunctional/serial/CacheCmd/cache/add_local 1.17
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 1.29
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.43
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.57
58 TestFunctional/serial/ExtraConfig 279.38
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.56
61 TestFunctional/serial/LogsFileCmd 0.57
62 TestFunctional/serial/InvalidService 4.22
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 12.91
66 TestFunctional/parallel/DryRun 0.21
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.17
74 TestFunctional/parallel/PersistentVolumeClaim 24.14
76 TestFunctional/parallel/SSHCmd 0.14
77 TestFunctional/parallel/CpCmd 0.29
79 TestFunctional/parallel/FileSync 0.08
80 TestFunctional/parallel/CertSync 0.45
84 TestFunctional/parallel/NodeLabels 0.08
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
88 TestFunctional/parallel/License 0.61
90 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
91 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
93 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
94 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.03
95 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
96 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
97 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
100 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
101 TestFunctional/parallel/ServiceCmd/List 0.31
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.13
104 TestFunctional/parallel/ServiceCmd/Format 0.11
105 TestFunctional/parallel/ServiceCmd/URL 0.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
107 TestFunctional/parallel/ProfileCmd/profile_list 0.16
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
109 TestFunctional/parallel/MountCmd/any-port 6.46
110 TestFunctional/parallel/MountCmd/specific-port 0.91
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.2
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.9
119 TestFunctional/parallel/ImageCommands/Setup 2.55
120 TestFunctional/parallel/DockerEnv/bash 0.53
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.12
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.52
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.51
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 28.48
138 TestImageBuild/serial/NormalBuild 2.12
140 TestImageBuild/serial/BuildWithDockerIgnore 0.12
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 69.31
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.94
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.26
151 TestJSONOutput/start/Command 43.9
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.26
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.22
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 9.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.33
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 61.11
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.15
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
273 TestStartStop/group/no-preload/serial/Stop 0.06
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-435000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-435000: exit status 85 (94.014334ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.31.1 | 31 Jul 23 03:53 PDT |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 03:53:36
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 03:53:36.833088    5225 out.go:296] Setting OutFile to fd 1 ...
	I0731 03:53:36.833201    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:53:36.833205    5225 out.go:309] Setting ErrFile to fd 2...
	I0731 03:53:36.833207    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:53:36.833341    5225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	W0731 03:53:36.833400    5225 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: no such file or directory
	I0731 03:53:36.834542    5225 out.go:303] Setting JSON to true
	I0731 03:53:36.852302    5225 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8587,"bootTime":1690792229,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 03:53:36.852385    5225 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 03:53:36.857693    5225 out.go:97] [download-only-435000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 03:53:36.860836    5225 out.go:169] MINIKUBE_LOCATION=16968
	I0731 03:53:36.857788    5225 notify.go:220] Checking for updates...
	W0731 03:53:36.857807    5225 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 03:53:36.867623    5225 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:53:36.870820    5225 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 03:53:36.873865    5225 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 03:53:36.876842    5225 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	W0731 03:53:36.882798    5225 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 03:53:36.883003    5225 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 03:53:36.885820    5225 out.go:97] Using the qemu2 driver based on user configuration
	I0731 03:53:36.885838    5225 start.go:298] selected driver: qemu2
	I0731 03:53:36.885841    5225 start.go:898] validating driver "qemu2" against <nil>
	I0731 03:53:36.885896    5225 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 03:53:36.888829    5225 out.go:169] Automatically selected the socket_vmnet network
	I0731 03:53:36.894029    5225 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 03:53:36.894118    5225 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 03:53:36.894178    5225 cni.go:84] Creating CNI manager for ""
	I0731 03:53:36.894195    5225 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 03:53:36.894204    5225 start_flags.go:319] config:
	{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:53:36.898740    5225 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 03:53:36.903158    5225 out.go:97] Downloading VM boot image ...
	I0731 03:53:36.903199    5225 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0731 03:53:50.268959    5225 out.go:97] Starting control plane node download-only-435000 in cluster download-only-435000
	I0731 03:53:50.268970    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:53:50.367320    5225 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 03:53:50.367370    5225 cache.go:57] Caching tarball of preloaded images
	I0731 03:53:50.368297    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:53:50.372599    5225 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0731 03:53:50.372608    5225 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:53:50.593729    5225 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0731 03:54:02.354713    5225 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:02.354887    5225 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:02.996549    5225 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0731 03:54:02.996744    5225 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/download-only-435000/config.json ...
	I0731 03:54:02.996764    5225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/download-only-435000/config.json: {Name:mk709c968bf792ce50f91e1c718b5910675af98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 03:54:02.997024    5225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0731 03:54:02.997189    5225 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0731 03:54:03.479156    5225 out.go:169] 
	W0731 03:54:03.483187    5225 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16968-4815/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690 0x1063d8690] Decompressors:map[bz2:0x140001e7340 gz:0x140001e7348 tar:0x140001e72a0 tar.bz2:0x140001e72d0 tar.gz:0x140001e72e0 tar.xz:0x140001e72f0 tar.zst:0x140001e7330 tbz2:0x140001e72d0 tgz:0x140001e72e0 txz:0x140001e72f0 tzst:0x140001e7330 xz:0x140001e7350 zip:0x140001e7370 zst:0x140001e7358] Getters:map[file:0x14000f3c5b0 http:0x140007aa190 https:0x140007aa1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 03:54:03.483212    5225 out_reason.go:110] 
	W0731 03:54:03.492059    5225 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 03:54:03.496092    5225 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
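
The Last Start capture above preserves the root cause of the kubectl cache failure: the v1.16.0 darwin/arm64 kubectl checksum file is missing upstream. The URL from the log can be probed directly; a sketch assuming curl, with -L to follow the dl.k8s.io redirect:

	curl -sIL https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 | grep '^HTTP'

The exit status 85 from logs itself is a separate matter: a download-only profile never creates a node, so there is no control plane to collect logs from, and the test accepts that outcome.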

TestDownloadOnly/v1.27.3/json-events (11.41s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 : (11.408852875s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (11.41s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-435000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-435000: exit status 85 (75.445375ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.31.1 | 31 Jul 23 03:53 PDT |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.31.1 | 31 Jul 23 03:54 PDT |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 03:54:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 03:54:03.687303    5238 out.go:296] Setting OutFile to fd 1 ...
	I0731 03:54:03.687416    5238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:54:03.687419    5238 out.go:309] Setting ErrFile to fd 2...
	I0731 03:54:03.687421    5238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 03:54:03.687524    5238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	W0731 03:54:03.687582    5238 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16968-4815/.minikube/config/config.json: no such file or directory
	I0731 03:54:03.688469    5238 out.go:303] Setting JSON to true
	I0731 03:54:03.704021    5238 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8614,"bootTime":1690792229,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 03:54:03.704093    5238 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 03:54:03.708227    5238 out.go:97] [download-only-435000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 03:54:03.712111    5238 out.go:169] MINIKUBE_LOCATION=16968
	I0731 03:54:03.708297    5238 notify.go:220] Checking for updates...
	I0731 03:54:03.718129    5238 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 03:54:03.721175    5238 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 03:54:03.724150    5238 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 03:54:03.727174    5238 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	W0731 03:54:03.733139    5238 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 03:54:03.733407    5238 config.go:182] Loaded profile config "download-only-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0731 03:54:03.733436    5238 start.go:806] api.Load failed for download-only-435000: filestore "download-only-435000": Docker machine "download-only-435000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 03:54:03.733482    5238 driver.go:373] Setting default libvirt URI to qemu:///system
	W0731 03:54:03.733493    5238 start.go:806] api.Load failed for download-only-435000: filestore "download-only-435000": Docker machine "download-only-435000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 03:54:03.736091    5238 out.go:97] Using the qemu2 driver based on existing profile
	I0731 03:54:03.736098    5238 start.go:298] selected driver: qemu2
	I0731 03:54:03.736100    5238 start.go:898] validating driver "qemu2" against &{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:54:03.737964    5238 cni.go:84] Creating CNI manager for ""
	I0731 03:54:03.737983    5238 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 03:54:03.737988    5238 start_flags.go:319] config:
	{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 03:54:03.741762    5238 iso.go:125] acquiring lock: {Name:mk8b6cacb20e74bb17d3b4be8b3bbaf9d5f950e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 03:54:03.745108    5238 out.go:97] Starting control plane node download-only-435000 in cluster download-only-435000
	I0731 03:54:03.745114    5238 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:54:03.966286    5238 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 03:54:03.966369    5238 cache.go:57] Caching tarball of preloaded images
	I0731 03:54:03.967126    5238 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0731 03:54:03.971333    5238 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0731 03:54:03.971362    5238 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:04.193388    5238 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0731 03:54:13.344131    5238 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 03:54:13.344281    5238 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)
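
Each preload URL embeds its expected md5 and the tarball lands in the shared cache, so a cached preload can be re-verified offline; a sketch using the path and checksum shown in the log above (macOS md5 assumed):

	md5 -q /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	# expected: e061b1178966dc348ac19219444153f4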

TestDownloadOnly/DeleteAll (0.28s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.28s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-435000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-598000 --alsologtostderr --binary-mirror http://127.0.0.1:50155 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-598000
--- PASS: TestBinaryMirror (0.34s)

TestHyperKitDriverInstallOrUpdate (9.65s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.65s)

TestErrorSpam/setup (28.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-132000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-132000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 --driver=qemu2 : (28.240172584s)
--- PASS: TestErrorSpam/setup (28.24s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (12.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 stop: (12.078907125s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-132000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-132000 stop
--- PASS: TestErrorSpam/stop (12.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/16968-4815/.minikube/files/etc/test/nested/copy/5223/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-652000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.49679825s)
--- PASS: TestFunctional/serial/StartWithProxy (45.50s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.54s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-652000 --alsologtostderr -v=8: (35.540190667s)
functional_test.go:659: soft start took 35.540663875s for "functional-652000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.54s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-652000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:3.1: (2.16243775s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:3.3: (2.00475275s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 cache add registry.k8s.io/pause:latest: (1.621977834s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.79s)
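
The first cache add dominates the wall time because it pulls the tag and then loads it into the guest; a host-side copy also remains under the cache directory. Listing it is a quick sanity check; the images subdirectory layout here is an assumption, not something this log shows:

	ls /Users/jenkins/minikube-integration/16968-4815/.minikube/cache/images/arm64/registry.k8s.io/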

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3750780560/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache add minikube-local-cache-test:functional-652000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache delete minikube-local-cache-test:functional-652000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-652000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.532292ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 cache reload: (1.059747917s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)
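
Condensed, the reload sequence above is: remove the image inside the guest, confirm crictl no longer finds it, then restore it from the host-side cache; the same three commands minus the harness wrapper:

	out/minikube-darwin-arm64 -p functional-652000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-652000 cache reload
	out/minikube-darwin-arm64 -p functional-652000 ssh sudo crictl inspecti registry.k8s.io/pause:latest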

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.43s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 kubectl -- --context functional-652000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-652000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

TestFunctional/serial/ExtraConfig (279.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-652000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m39.3796895s)
functional_test.go:757: restart took 4m39.379804375s for "functional-652000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (279.38s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-652000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.56s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.56s)

TestFunctional/serial/LogsFileCmd (0.57s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2512385013/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.57s)

TestFunctional/serial/InvalidService (4.22s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-652000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-652000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-652000: exit status 115 (156.798792ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.14:31077 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-652000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 config get cpus: exit status 14 (29.280125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 config get cpus: exit status 14 (28.755ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (12.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-652000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-652000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5890: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.91s)

TestFunctional/parallel/DryRun (0.21s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-652000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (105.932459ms)

-- stdout --
	* [functional-652000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 04:02:39.636315    5877 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:02:39.636432    5877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:02:39.636435    5877 out.go:309] Setting ErrFile to fd 2...
	I0731 04:02:39.636437    5877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:02:39.636541    5877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:02:39.637545    5877 out.go:303] Setting JSON to false
	I0731 04:02:39.653016    5877 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9130,"bootTime":1690792229,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:02:39.653090    5877 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:02:39.657283    5877 out.go:177] * [functional-652000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	I0731 04:02:39.660218    5877 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:02:39.663243    5877 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:02:39.660312    5877 notify.go:220] Checking for updates...
	I0731 04:02:39.670191    5877 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:02:39.673241    5877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:02:39.676275    5877 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:02:39.679253    5877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:02:39.682540    5877 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:02:39.682761    5877 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:02:39.687206    5877 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 04:02:39.694256    5877 start.go:298] selected driver: qemu2
	I0731 04:02:39.694260    5877 start.go:898] validating driver "qemu2" against &{Name:functional-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-6
52000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mou
ntString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:02:39.694302    5877 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:02:39.700170    5877 out.go:177] 
	W0731 04:02:39.704205    5877 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 04:02:39.707177    5877 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-652000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-652000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.457792ms)

-- stdout --
	* [functional-652000] minikube v1.31.1 sur Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 04:02:39.524926    5873 out.go:296] Setting OutFile to fd 1 ...
	I0731 04:02:39.525027    5873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:02:39.525030    5873 out.go:309] Setting ErrFile to fd 2...
	I0731 04:02:39.525033    5873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 04:02:39.525156    5873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
	I0731 04:02:39.526604    5873 out.go:303] Setting JSON to false
	I0731 04:02:39.544588    5873 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9130,"bootTime":1690792229,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 04:02:39.544656    5873 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0731 04:02:39.549417    5873 out.go:177] * [functional-652000] minikube v1.31.1 sur Darwin 13.4.1 (arm64)
	I0731 04:02:39.557237    5873 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 04:02:39.557278    5873 notify.go:220] Checking for updates...
	I0731 04:02:39.560244    5873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	I0731 04:02:39.564247    5873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 04:02:39.567250    5873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 04:02:39.570206    5873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	I0731 04:02:39.573211    5873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 04:02:39.576486    5873 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0731 04:02:39.576723    5873 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 04:02:39.580165    5873 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 04:02:39.587227    5873 start.go:298] selected driver: qemu2
	I0731 04:02:39.587232    5873 start.go:898] validating driver "qemu2" against &{Name:functional-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-6
52000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.14 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mou
ntString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 04:02:39.587291    5873 start.go:909] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 04:02:39.593195    5873 out.go:177] 
	W0731 04:02:39.597261    5873 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 04:02:39.601243    5873 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.14s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4291671e-461c-4045-a8d8-0e34c8faaeb4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0181635s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-652000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-652000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-652000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4a7714ab-45fc-4869-a6f5-f1c775d02242] Pending
helpers_test.go:344: "sp-pod" [4a7714ab-45fc-4869-a6f5-f1c775d02242] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4a7714ab-45fc-4869-a6f5-f1c775d02242] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.016033291s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-652000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-652000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [398da703-fd43-4953-9701-fb0ae6c7aa59] Pending
helpers_test.go:344: "sp-pod" [398da703-fd43-4953-9701-fb0ae6c7aa59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [398da703-fd43-4953-9701-fb0ae6c7aa59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.015028833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-652000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.14s)

TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.29s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh -n functional-652000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 cp functional-652000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1705913254/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh -n functional-652000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5223/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /etc/test/nested/copy/5223/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.45s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5223.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /etc/ssl/certs/5223.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5223.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /usr/share/ca-certificates/5223.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/52232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /etc/ssl/certs/52232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/52232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /usr/share/ca-certificates/52232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-652000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "sudo systemctl is-active crio": exit status 1 (67.401084ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.61s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5704: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-652000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [af526f46-f3c3-4ae9-94ef-e4f4a2b9943d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [af526f46-f3c3-4ae9-94ef-e4f4a2b9943d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005194709s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-652000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.124.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-652000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-652000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-652000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-7cxn2" [c6a5cdab-4b5d-47ca-a596-cdc64e131dc8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-7cxn2" [c6a5cdab-4b5d-47ca-a596-cdc64e131dc8] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.012886417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service list -o json
functional_test.go:1493: Took "296.032959ms" to run "out/minikube-darwin-arm64 -p functional-652000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.14:31993
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.14:31993
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.16s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "128.72875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "35.733167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.142167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.49325ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (6.46s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1690801348818439000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1690801348818439000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1690801348818439000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001/test-1690801348818439000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.138584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 11:02 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 11:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 11:02 test-1690801348818439000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh cat /mount-9p/test-1690801348818439000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-652000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ad9b6d0d-34bb-467b-85c6-532f859156e5] Pending
helpers_test.go:344: "busybox-mount" [ad9b6d0d-34bb-467b-85c6-532f859156e5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ad9b6d0d-34bb-467b-85c6-532f859156e5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ad9b6d0d-34bb-467b-85c6-532f859156e5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008610084s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-652000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1295519227/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.46s)

TestFunctional/parallel/MountCmd/specific-port (0.91s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2575899690/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.736084ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2575899690/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "sudo umount -f /mount-9p": exit status 1 (66.57375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-652000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2575899690/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.91s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-652000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.27.3           | fb73e92641fd5 | 66.5MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/google-containers/addon-resizer      | functional-652000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-scheduler              | v1.27.3           | bcb9e554eaab6 | 56.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-652000 | 8e6b0f677e162 | 30B    |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/nginx                     | latest            | ff78c7a65ec2b | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | ab3683b584ae5 | 107MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 39dfb036b0986 | 115MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-652000 image ls --format table --alsologtostderr:
I0731 04:03:05.498076    6067 out.go:296] Setting OutFile to fd 1 ...
I0731 04:03:05.498228    6067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.498231    6067 out.go:309] Setting ErrFile to fd 2...
I0731 04:03:05.498233    6067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.498348    6067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:03:05.498767    6067 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.498824    6067 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.499743    6067 ssh_runner.go:195] Run: systemctl --version
I0731 04:03:05.499754    6067 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
I0731 04:03:05.538262    6067 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-652000 image ls --format json --alsologtostderr:
[{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d
82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-652000"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["
registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"115000000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8e6b0f677e162e1bbfbee8b72156f03cf01c61d0d1e8145d7aca7061a17ce148","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functio
nal-652000"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-652000 image ls --format json --alsologtostderr:
I0731 04:03:05.416615    6063 out.go:296] Setting OutFile to fd 1 ...
I0731 04:03:05.416771    6063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.416775    6063 out.go:309] Setting ErrFile to fd 2...
I0731 04:03:05.416778    6063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.416893    6063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:03:05.417307    6063 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.417364    6063 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.418265    6063 ssh_runner.go:195] Run: systemctl --version
I0731 04:03:05.418276    6063 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
I0731 04:03:05.450974    6063 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-652000 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-652000
size: "32900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "107000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "66500000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8e6b0f677e162e1bbfbee8b72156f03cf01c61d0d1e8145d7aca7061a17ce148
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-652000
size: "30"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "115000000"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "56200000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-652000 image ls --format yaml --alsologtostderr:
I0731 04:03:05.332971    6057 out.go:296] Setting OutFile to fd 1 ...
I0731 04:03:05.333125    6057 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.333128    6057 out.go:309] Setting ErrFile to fd 2...
I0731 04:03:05.333131    6057 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.333253    6057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:03:05.333645    6057 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.333701    6057 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.334497    6057 ssh_runner.go:195] Run: systemctl --version
I0731 04:03:05.334509    6057 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
I0731 04:03:05.369893    6057 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh pgrep buildkitd: exit status 1 (70.667375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image build -t localhost/my-image:functional-652000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 image build -t localhost/my-image:functional-652000 testdata/build --alsologtostderr: (2.746994042s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-652000 image build -t localhost/my-image:functional-652000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 4ea2338dc783
Removing intermediate container 4ea2338dc783
---> f57a38270f26
Step 3/3 : ADD content.txt /
---> ec485b7d31e5
Successfully built ec485b7d31e5
Successfully tagged localhost/my-image:functional-652000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-652000 image build -t localhost/my-image:functional-652000 testdata/build --alsologtostderr:
I0731 04:03:05.440378    6065 out.go:296] Setting OutFile to fd 1 ...
I0731 04:03:05.440584    6065 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.440589    6065 out.go:309] Setting ErrFile to fd 2...
I0731 04:03:05.440592    6065 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 04:03:05.440712    6065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16968-4815/.minikube/bin
I0731 04:03:05.441125    6065 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.441849    6065 config.go:182] Loaded profile config "functional-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0731 04:03:05.442746    6065 ssh_runner.go:195] Run: systemctl --version
I0731 04:03:05.442757    6065 sshutil.go:53] new ssh client: &{IP:192.168.105.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/id_rsa Username:docker}
I0731 04:03:05.475359    6065 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2402204873.tar
I0731 04:03:05.475413    6065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 04:03:05.478472    6065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2402204873.tar
I0731 04:03:05.480064    6065 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2402204873.tar: stat -c "%s %y" /var/lib/minikube/build/build.2402204873.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2402204873.tar': No such file or directory
I0731 04:03:05.480088    6065 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2402204873.tar --> /var/lib/minikube/build/build.2402204873.tar (3072 bytes)
I0731 04:03:05.488314    6065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2402204873
I0731 04:03:05.491501    6065 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2402204873 -xf /var/lib/minikube/build/build.2402204873.tar
I0731 04:03:05.494331    6065 docker.go:339] Building image: /var/lib/minikube/build/build.2402204873
I0731 04:03:05.494400    6065 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-652000 /var/lib/minikube/build/build.2402204873
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0731 04:03:08.146960    6065 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-652000 /var/lib/minikube/build/build.2402204873: (2.652606709s)
I0731 04:03:08.147042    6065 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2402204873
I0731 04:03:08.150028    6065 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2402204873.tar
I0731 04:03:08.152793    6065 build_images.go:207] Built localhost/my-image:functional-652000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2402204873.tar
I0731 04:03:08.152809    6065 build_images.go:123] succeeded building to: functional-652000
I0731 04:03:08.152811    6065 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.90s)
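For reference, the three build steps recorded in the stdout above imply a Dockerfile along these lines (a reconstruction inferred from the log; the actual contents of testdata/build may differ):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /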

TestFunctional/parallel/ImageCommands/Setup (2.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/07/31 04:02:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.513794833s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-652000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.55s)

TestFunctional/parallel/DockerEnv/bash (0.53s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-652000 docker-env) && out/minikube-darwin-arm64 status -p functional-652000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-652000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.53s)
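The docker-env flow exercised here points the host's docker CLI at the daemon inside the VM for the rest of the shell session; a minimal sketch of the same workflow outside the test harness (profile name taken from this log):

	eval $(out/minikube-darwin-arm64 -p functional-652000 docker-env)
	docker images    # now answered by the daemon inside the functional-652000 VM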

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
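update-context, exercised here and in the two variants that follow, rewrites the kubeconfig entry for the profile so kubectl targets the cluster's current address; one way to inspect the result (the kubectl call is an illustration, not part of the test):

	out/minikube-darwin-arm64 -p functional-652000 update-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # server URL should match the VM IP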

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr: (2.042599333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr: (1.441769083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.485354666s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-652000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 image load --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr: (1.892001625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image save gcr.io/google-containers/addon-resizer:functional-652000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image rm gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-652000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 image save --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-652000 image save --daemon gcr.io/google-containers/addon-resizer:functional-652000 --alsologtostderr: (1.510644125s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-652000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)
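Taken together, the ImageSaveToFile/ImageRemove/ImageLoadFromFile/ImageSaveDaemon runs above compose into a save-and-restore round trip; a sketch using the same subcommands (the tar path is illustrative):

	out/minikube-darwin-arm64 -p functional-652000 image save gcr.io/google-containers/addon-resizer:functional-652000 /tmp/addon-resizer.tar
	out/minikube-darwin-arm64 -p functional-652000 image rm gcr.io/google-containers/addon-resizer:functional-652000
	out/minikube-darwin-arm64 -p functional-652000 image load /tmp/addon-resizer.tar
	out/minikube-darwin-arm64 -p functional-652000 image ls    # the image is back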

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-652000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-652000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-652000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (28.48s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-484000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-484000 --driver=qemu2 : (28.475271208s)
--- PASS: TestImageBuild/serial/Setup (28.48s)

TestImageBuild/serial/NormalBuild (2.12s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-484000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-484000: (2.120171792s)
--- PASS: TestImageBuild/serial/NormalBuild (2.12s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-484000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-484000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)
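The -f flag above selects a Dockerfile that is not at the root of the build context, which implies a fixture layout roughly like this (inferred from the command, not verified against the repo):

	testdata/image-build/test-f/
	    inner/
	        Dockerfile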

TestIngressAddonLegacy/StartLegacyK8sCluster (69.31s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-464000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-464000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m9.3113315s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (69.31s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons enable ingress --alsologtostderr -v=5: (17.935788208s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.94s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-464000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)

TestJSONOutput/start/Command (43.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-453000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-453000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.898779166s)
--- PASS: TestJSONOutput/start/Command (43.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.26s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-453000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.26s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.22s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-453000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-453000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-453000 --output=json --user=testUser: (9.078842083s)
--- PASS: TestJSONOutput/stop/Command (9.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-557000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-557000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.657042ms)
-- stdout --
	{"specversion":"1.0","id":"7547ca55-6317-4696-a85c-b96c585e4545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-557000] minikube v1.31.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a0a7b26-c556-4f18-8884-507d5e60cc18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"3878f544-9b8e-4ce1-8ab0-f8cb780a21a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig"}}
	{"specversion":"1.0","id":"966f4ee1-350f-4c6e-8167-5c05b41ff2d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ee51d326-83fc-4ea4-850d-868bbe07d673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45bc0b6b-67e3-474b-90c8-389a4cc99f6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube"}}
	{"specversion":"1.0","id":"33249888-9a2b-434d-a73e-5e736e6d56ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05e38139-f9f1-4329-8380-4d4a44914490","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-557000
--- PASS: TestErrorJSONOutput (0.33s)
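Every line of the --output=json stream above is a standalone CloudEvents envelope, so the output can be post-processed line by line; a sketch that extracts just the error message with jq (assuming jq is available):

	out/minikube-darwin-arm64 start -p json-output-error-557000 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# -> The driver 'fail' is not supported on darwin/arm64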

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-284000 --driver=qemu2 
E0731 04:06:57.039437    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.046346    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.058362    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.080400    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.122421    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.204437    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.366490    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:57.688513    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:58.330556    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:06:59.612618    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:07:02.174936    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:07:07.295122    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
E0731 04:07:17.536338    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-284000 --driver=qemu2 : (29.263703542s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-285000 --driver=qemu2 
E0731 04:07:38.016057    5223 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16968-4815/.minikube/profiles/functional-652000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-285000 --driver=qemu2 : (31.033177084s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-284000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-285000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-285000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-285000
helpers_test.go:175: Cleaning up "first-284000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-284000
--- PASS: TestMinikubeProfile (61.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-578000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (90.838541ms)
-- stdout --
	* [NoKubernetes-578000] minikube v1.31.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16968-4815/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16968-4815/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-578000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-578000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.511958ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-578000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-578000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-578000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-578000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.434875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-578000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-611000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-611000 -n old-k8s-version-611000: exit status 7 (28.092125ms)
-- stdout --
	Stopped

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-611000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-775000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-775000 -n no-preload-775000: exit status 7 (28.038459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-775000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-775000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-775000 -n embed-certs-775000: exit status 7 (27.032583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-775000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-127000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-127000 -n default-k8s-diff-port-127000: exit status 7 (27.92675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-127000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-000000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-000000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-000000 -n newest-cni-000000: exit status 7 (28.95925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-000000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.92s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1: exit status 80 (83.2635ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16968-4815/.minikube/machines/functional-652000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_09c2d51488d053dd16b3ec072814721ac671f1ce_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (62.920916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (62.595125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (66.750959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (62.502ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (64.748709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-652000 ssh "findmnt -T" /mount3: exit status 1 (63.352667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-652000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2232768490/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.92s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-525000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-525000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-525000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/hosts:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/resolv.conf:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-525000

>>> host: crictl pods:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: crictl containers:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> k8s: describe netcat deployment:
error: context "cilium-525000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-525000" does not exist

>>> k8s: netcat logs:
error: context "cilium-525000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-525000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-525000" does not exist

>>> k8s: coredns logs:
error: context "cilium-525000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-525000" does not exist

>>> k8s: api server logs:
error: context "cilium-525000" does not exist

>>> host: /etc/cni:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: ip a s:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: ip r s:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: iptables-save:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: iptables table nat:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-525000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-525000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-525000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-525000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-525000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-525000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-525000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-525000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-525000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-525000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-525000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: kubelet daemon config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> k8s: kubelet logs:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-525000

>>> host: docker daemon status:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: docker daemon config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: docker system info:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: cri-docker daemon status:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: cri-docker daemon config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: cri-dockerd version:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: containerd daemon status:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: containerd daemon config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: containerd config dump:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: crio daemon status:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: crio daemon config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: /etc/crio:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

>>> host: crio config:
* Profile "cilium-525000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-525000"

----------------------- debugLogs end: cilium-525000 [took: 2.094478209s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-525000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-262000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)
