Test Report: QEMU_macOS 18320

135c3c98ed62ac5bccf3530555abd368cdd0fde3:2024-03-07:33456

Failed tests (92/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.14
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.9
39 TestAddons/parallel/Ingress 34.45
54 TestCertOptions 10.24
55 TestCertExpiration 195.39
56 TestDockerFlags 10.28
57 TestForceSystemdFlag 10.04
58 TestForceSystemdEnv 10.01
103 TestFunctional/parallel/ServiceCmdConnect 39.67
175 TestMutliControlPlane/serial/StopSecondaryNode 312.31
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 227.3
177 TestMutliControlPlane/serial/RestartSecondaryNode 305.26
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 341.44
182 TestImageBuild/serial/Setup 9.91
185 TestJSONOutput/start/Command 9.77
191 TestJSONOutput/pause/Command 0.08
197 TestJSONOutput/unpause/Command 0.04
214 TestMinikubeProfile 10.28
217 TestMountStart/serial/StartWithMountFirst 10.56
220 TestMultiNode/serial/FreshStart2Nodes 9.88
221 TestMultiNode/serial/DeployApp2Nodes 97.3
222 TestMultiNode/serial/PingHostFrom2Pods 0.09
223 TestMultiNode/serial/AddNode 0.08
224 TestMultiNode/serial/MultiNodeLabels 0.06
225 TestMultiNode/serial/ProfileList 0.1
226 TestMultiNode/serial/CopyFile 0.06
227 TestMultiNode/serial/StopNode 0.14
228 TestMultiNode/serial/StartAfterStop 49.25
229 TestMultiNode/serial/RestartKeepsNodes 9.07
230 TestMultiNode/serial/DeleteNode 0.11
231 TestMultiNode/serial/StopMultiNode 1.99
232 TestMultiNode/serial/RestartMultiNode 5.26
233 TestMultiNode/serial/ValidateNameConflict 20.03
237 TestPreload 9.96
239 TestScheduledStopUnix 10.08
240 TestSkaffold 16.58
243 TestRunningBinaryUpgrade 629.28
245 TestKubernetesUpgrade 17.23
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.94
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.42
261 TestStoppedBinaryUpgrade/Upgrade 579.24
263 TestPause/serial/Start 9.92
273 TestNoKubernetes/serial/StartWithK8s 9.82
274 TestNoKubernetes/serial/StartWithStopK8s 5.92
275 TestNoKubernetes/serial/Start 5.87
279 TestNoKubernetes/serial/StartNoArgs 6.4
281 TestNetworkPlugins/group/auto/Start 9.9
282 TestNetworkPlugins/group/kindnet/Start 9.93
283 TestNetworkPlugins/group/calico/Start 9.91
284 TestNetworkPlugins/group/custom-flannel/Start 9.76
285 TestNetworkPlugins/group/false/Start 9.72
286 TestNetworkPlugins/group/enable-default-cni/Start 9.73
287 TestNetworkPlugins/group/flannel/Start 9.92
289 TestNetworkPlugins/group/bridge/Start 9.78
290 TestNetworkPlugins/group/kubenet/Start 9.78
292 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
294 TestStartStop/group/no-preload/serial/FirstStart 10.07
295 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.14
299 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
300 TestStartStop/group/no-preload/serial/DeployApp 0.1
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/no-preload/serial/SecondStart 5.27
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/old-k8s-version/serial/Pause 0.11
310 TestStartStop/group/embed-certs/serial/FirstStart 9.98
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/no-preload/serial/Pause 0.1
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.94
317 TestStartStop/group/embed-certs/serial/DeployApp 0.09
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
321 TestStartStop/group/embed-certs/serial/SecondStart 5.77
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/embed-certs/serial/Pause 0.11
332 TestStartStop/group/newest-cni/serial/FirstStart 10.11
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
341 TestStartStop/group/newest-cni/serial/SecondStart 5.26
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
345 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (41.14s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.138072458s)

-- stdout --
	{"specversion":"1.0","id":"e9fada0e-662b-4c1a-bda5-2b7548d67dc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-510000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df6cf61b-02ae-4b5d-9d5b-8724266a96d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18320"}}
	{"specversion":"1.0","id":"2000e9db-72bb-421b-b4eb-ec5714c418ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig"}}
	{"specversion":"1.0","id":"8f9365b3-c00c-43f3-987a-2e48cc9f74db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b6998dce-d2a5-488c-9ef5-63cef020dda9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e0e0b5f-b02f-4d10-8907-606f9acbfbfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube"}}
	{"specversion":"1.0","id":"aa2cd1c0-4570-4768-b96d-56d7cab066b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"5857f383-bf76-4181-a025-c86199977bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c215e9cb-dea0-4e9d-a22f-fe6d9130ed24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9580c3ed-b70c-4848-b546-69ec746bcbe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"22c1c894-266b-4bf1-bba1-0141c95f086e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-510000\" primary control-plane node in \"download-only-510000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7e42d66-cfb7-4722-abde-991c8213df4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"93c65554-cf99-43a0-85e8-4e8cd67e64a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0] Decompressors:map[bz2:0x14000898df0 gz:0x14000898df8 tar:0x14000898da0 tar.bz2:0x14000898db0 tar.gz:0x14000898dc0 tar.xz:0x14000898dd0 tar.zst:0x14000898de0 tbz2:0x14000898db0 tgz:0x14
000898dc0 txz:0x14000898dd0 tzst:0x14000898de0 xz:0x14000898e00 zip:0x14000898e10 zst:0x14000898e08] Getters:map[file:0x140020d2570 http:0x140005fe230 https:0x140005fe280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1ff246d6-b188-4a05-bfce-844b05680cf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0307 13:46:27.547614    1598 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:46:27.547748    1598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:46:27.547751    1598 out.go:304] Setting ErrFile to fd 2...
	I0307 13:46:27.547753    1598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:46:27.547891    1598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	W0307 13:46:27.547968    1598 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18320-1153/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18320-1153/.minikube/config/config.json: no such file or directory
	I0307 13:46:27.549218    1598 out.go:298] Setting JSON to true
	I0307 13:46:27.566597    1598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":944,"bootTime":1709847043,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:46:27.566659    1598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:46:27.577159    1598 out.go:97] [download-only-510000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:46:27.581097    1598 out.go:169] MINIKUBE_LOCATION=18320
	I0307 13:46:27.577323    1598 notify.go:220] Checking for updates...
	W0307 13:46:27.577370    1598 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 13:46:27.598234    1598 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:46:27.601210    1598 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:46:27.604153    1598 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:46:27.608213    1598 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	W0307 13:46:27.614162    1598 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 13:46:27.614394    1598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:46:27.619207    1598 out.go:97] Using the qemu2 driver based on user configuration
	I0307 13:46:27.619228    1598 start.go:297] selected driver: qemu2
	I0307 13:46:27.619244    1598 start.go:901] validating driver "qemu2" against <nil>
	I0307 13:46:27.619322    1598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 13:46:27.621294    1598 out.go:169] Automatically selected the socket_vmnet network
	I0307 13:46:27.626906    1598 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 13:46:27.627022    1598 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 13:46:27.627117    1598 cni.go:84] Creating CNI manager for ""
	I0307 13:46:27.627135    1598 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 13:46:27.627187    1598 start.go:340] cluster config:
	{Name:download-only-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:46:27.633932    1598 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 13:46:27.637272    1598 out.go:97] Downloading VM boot image ...
	I0307 13:46:27.637294    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 13:46:46.368678    1598 out.go:97] Starting "download-only-510000" primary control-plane node in "download-only-510000" cluster
	I0307 13:46:46.368728    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:46:46.656445    1598 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 13:46:46.656524    1598 cache.go:56] Caching tarball of preloaded images
	I0307 13:46:46.657246    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:46:46.662023    1598 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 13:46:46.662051    1598 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:46:47.271686    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 13:47:07.206577    1598 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:07.206760    1598 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:07.904369    1598 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 13:47:07.904551    1598 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-510000/config.json ...
	I0307 13:47:07.904567    1598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-510000/config.json: {Name:mk7fd9b47ffcd4f27544635ead98cbb488fc03e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:47:07.904805    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:47:07.905002    1598 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 13:47:08.608084    1598 out.go:169] 
	W0307 13:47:08.614137    1598 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0] Decompressors:map[bz2:0x14000898df0 gz:0x14000898df8 tar:0x14000898da0 tar.bz2:0x14000898db0 tar.gz:0x14000898dc0 tar.xz:0x14000898dd0 tar.zst:0x14000898de0 tbz2:0x14000898db0 tgz:0x14000898dc0 txz:0x14000898dd0 tzst:0x14000898de0 xz:0x14000898e00 zip:0x14000898e10 zst:0x14000898e08] Getters:map[file:0x140020d2570 http:0x140005fe230 https:0x140005fe280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 13:47:08.614161    1598 out_reason.go:110] 
	W0307 13:47:08.622061    1598 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 13:47:08.626108    1598 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-510000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.14s)
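
Note: the root cause is the 404 on the kubectl checksum file in the error above, which suggests no darwin/arm64 kubectl artifact was ever published for v1.20.0. As a quick sanity check (a suggested command, not part of the original run), the failing URL can be probed directly; a 404 here would confirm the artifact is missing upstream rather than a transient network error:

	curl -fsSLI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256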

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
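
Note: this failure is a direct cascade of the json-events failure above; because the kubectl download failed, nothing was ever written to the cache path the test stats. A suggested check of the cache directory on the agent (not part of the original run):

	ls -l /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/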

TestOffline (9.9s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.717821375s)

-- stdout --
	* [offline-docker-139000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-139000" primary control-plane node in "offline-docker-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:37:16.077207    3800 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:37:16.077368    3800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:16.077371    3800 out.go:304] Setting ErrFile to fd 2...
	I0307 14:37:16.077373    3800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:16.077512    3800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:37:16.078752    3800 out.go:298] Setting JSON to false
	I0307 14:37:16.096315    3800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3993,"bootTime":1709847043,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:37:16.096391    3800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:37:16.102070    3800 out.go:177] * [offline-docker-139000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:37:16.109044    3800 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:37:16.109068    3800 notify.go:220] Checking for updates...
	I0307 14:37:16.116956    3800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:37:16.119939    3800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:37:16.122964    3800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:37:16.125969    3800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:37:16.128995    3800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:37:16.132328    3800 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:16.132388    3800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:37:16.135968    3800 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:37:16.143090    3800 start.go:297] selected driver: qemu2
	I0307 14:37:16.143112    3800 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:37:16.143135    3800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:37:16.145309    3800 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:37:16.148936    3800 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:37:16.153051    3800 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:37:16.153095    3800 cni.go:84] Creating CNI manager for ""
	I0307 14:37:16.153103    3800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:37:16.153107    3800 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:37:16.153148    3800 start.go:340] cluster config:
	{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:37:16.157803    3800 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:37:16.165985    3800 out.go:177] * Starting "offline-docker-139000" primary control-plane node in "offline-docker-139000" cluster
	I0307 14:37:16.169972    3800 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:37:16.170004    3800 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:37:16.170017    3800 cache.go:56] Caching tarball of preloaded images
	I0307 14:37:16.170096    3800 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:37:16.170102    3800 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:37:16.170166    3800 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/offline-docker-139000/config.json ...
	I0307 14:37:16.170176    3800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/offline-docker-139000/config.json: {Name:mkfba3fc04f4892683d01369be1c78b637f71930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:37:16.170457    3800 start.go:360] acquireMachinesLock for offline-docker-139000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:16.170488    3800 start.go:364] duration metric: took 21.875µs to acquireMachinesLock for "offline-docker-139000"
	I0307 14:37:16.170503    3800 start.go:93] Provisioning new machine with config: &{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:16.170535    3800 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:16.174933    3800 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:16.190390    3800 start.go:159] libmachine.API.Create for "offline-docker-139000" (driver="qemu2")
	I0307 14:37:16.190423    3800 client.go:168] LocalClient.Create starting
	I0307 14:37:16.190488    3800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:16.190521    3800 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:16.190530    3800 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:16.190574    3800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:16.190594    3800 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:16.190598    3800 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:16.190993    3800 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:16.337499    3800 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:16.391272    3800 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:16.391279    3800 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:16.391453    3800 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:16.404185    3800 main.go:141] libmachine: STDOUT: 
	I0307 14:37:16.404206    3800 main.go:141] libmachine: STDERR: 
	I0307 14:37:16.404266    3800 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2 +20000M
	I0307 14:37:16.416644    3800 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:16.416679    3800 main.go:141] libmachine: STDERR: 
	I0307 14:37:16.416694    3800 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:16.416697    3800 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:16.416736    3800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ad:d6:f5:ef:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:16.418813    3800 main.go:141] libmachine: STDOUT: 
	I0307 14:37:16.418832    3800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:16.418850    3800 client.go:171] duration metric: took 228.421667ms to LocalClient.Create
	I0307 14:37:18.420703    3800 start.go:128] duration metric: took 2.250155625s to createHost
	I0307 14:37:18.420719    3800 start.go:83] releasing machines lock for "offline-docker-139000", held for 2.250222959s
	W0307 14:37:18.420730    3800 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:18.429631    3800 out.go:177] * Deleting "offline-docker-139000" in qemu2 ...
	W0307 14:37:18.438875    3800 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:18.438883    3800 start.go:728] Will try again in 5 seconds ...
	I0307 14:37:23.441057    3800 start.go:360] acquireMachinesLock for offline-docker-139000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:23.441293    3800 start.go:364] duration metric: took 177.416µs to acquireMachinesLock for "offline-docker-139000"
	I0307 14:37:23.441351    3800 start.go:93] Provisioning new machine with config: &{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:23.441487    3800 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:23.448891    3800 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:23.480910    3800 start.go:159] libmachine.API.Create for "offline-docker-139000" (driver="qemu2")
	I0307 14:37:23.480955    3800 client.go:168] LocalClient.Create starting
	I0307 14:37:23.481079    3800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:23.481140    3800 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:23.481163    3800 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:23.481228    3800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:23.481265    3800 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:23.481275    3800 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:23.481744    3800 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:23.629136    3800 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:23.689467    3800 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:23.689473    3800 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:23.689658    3800 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:23.701736    3800 main.go:141] libmachine: STDOUT: 
	I0307 14:37:23.701756    3800 main.go:141] libmachine: STDERR: 
	I0307 14:37:23.701810    3800 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2 +20000M
	I0307 14:37:23.717435    3800 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:23.717455    3800 main.go:141] libmachine: STDERR: 
	I0307 14:37:23.717467    3800 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:23.717470    3800 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:23.717518    3800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:c9:8a:bd:67:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/offline-docker-139000/disk.qcow2
	I0307 14:37:23.719265    3800 main.go:141] libmachine: STDOUT: 
	I0307 14:37:23.719282    3800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:23.719294    3800 client.go:171] duration metric: took 238.333541ms to LocalClient.Create
	I0307 14:37:25.721475    3800 start.go:128] duration metric: took 2.279956291s to createHost
	I0307 14:37:25.721548    3800 start.go:83] releasing machines lock for "offline-docker-139000", held for 2.28023275s
	W0307 14:37:25.721966    3800 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:25.732561    3800 out.go:177] 
	W0307 14:37:25.736710    3800 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:37:25.736749    3800 out.go:239] * 
	* 
	W0307 14:37:25.739812    3800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:37:25.748523    3800 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-07 14:37:25.765114 -0800 PST m=+3058.260611460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-139000 -n offline-docker-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-139000 -n offline-docker-139000: exit status 7 (69.880792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-139000
--- FAIL: TestOffline (9.90s)
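
Note: both VM creation attempts die on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet helper was not reachable on this agent; the many other ~10s start failures in the table above are consistent with the same root cause. Suggested checks (assuming the usual Homebrew/launchd socket_vmnet setup; not part of the original run):

	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

If the socket is absent or no daemon is running, restarting the service (for example, sudo brew services restart socket_vmnet on a Homebrew install) may clear this whole class of failures.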

TestAddons/parallel/Ingress (34.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-263000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-263000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-263000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6f929bbb-6156-470d-bcd8-346bf97fc817] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6f929bbb-6156-470d-bcd8-346bf97fc817] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003831292s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-263000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.035680916s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-263000 addons disable ingress --alsologtostderr -v=1: (7.214449792s)
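
Note: the ingress itself came up (the nginx pod became Ready and the in-VM curl succeeded); only the host-side nslookup against the ingress-dns address 192.168.105.2 timed out. A suggested re-test with explicit timeouts instead of nslookup's defaults (not part of the original run):

	dig @192.168.105.2 hello-john.test +time=2 +tries=1
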
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-263000 -n addons-263000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| delete  | -p download-only-304000                                                                     | download-only-304000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| delete  | -p download-only-510000                                                                     | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| delete  | -p download-only-973000                                                                     | download-only-973000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| delete  | -p download-only-304000                                                                     | download-only-304000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| start   | --download-only -p                                                                          | binary-mirror-773000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST |                     |
	|         | binary-mirror-773000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49328                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-773000                                                                     | binary-mirror-773000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| addons  | enable dashboard -p                                                                         | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:48 PST |                     |
	|         | addons-263000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:48 PST |                     |
	|         | addons-263000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-263000 --wait=true                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:52 PST |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-263000 ip                                                                            | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	| addons  | addons-263000 addons disable                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	|         | -p addons-263000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-263000 ssh cat                                                                       | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	|         | /opt/local-path-provisioner/pvc-8066f6fb-2a0d-414c-ad66-0e0e6c420276_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-263000 addons disable                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:53 PST |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-263000 addons                                                                        | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-263000 addons                                                                        | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:52 PST | 07 Mar 24 13:52 PST |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | addons-263000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | -p addons-263000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-263000 addons                                                                        | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | addons-263000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-263000 ssh curl -s                                                                   | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-263000 ip                                                                            | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	| addons  | addons-263000 addons disable                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-263000 addons disable                                                                | addons-263000        | jenkins | v1.32.0 | 07 Mar 24 13:53 PST | 07 Mar 24 13:53 PST |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
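
The probe that decides this test is the `ssh curl` entry above: ingress counts as working once nginx answers for Host: nginx.example.com from inside the VM. To replay it by hand against the same profile, something like the following should work; the -s/-o/-w flags here are an addition to surface the HTTP status code, the test itself runs the plain curl shown in the table:

    # Re-run the ingress probe from inside the addons-263000 guest.
    out/minikube-darwin-arm64 -p addons-263000 ssh -- \
      curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'
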
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 13:48:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 13:48:51.690647    1772 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:48:51.690770    1772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:48:51.690773    1772 out.go:304] Setting ErrFile to fd 2...
	I0307 13:48:51.690776    1772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:48:51.690918    1772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:48:51.691994    1772 out.go:298] Setting JSON to false
	I0307 13:48:51.708231    1772 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1088,"bootTime":1709847043,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:48:51.708284    1772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:48:51.712035    1772 out.go:177] * [addons-263000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:48:51.718997    1772 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 13:48:51.722955    1772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:48:51.719039    1772 notify.go:220] Checking for updates...
	I0307 13:48:51.729028    1772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:48:51.731959    1772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:48:51.734960    1772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 13:48:51.738003    1772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 13:48:51.741052    1772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:48:51.744945    1772 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 13:48:51.751989    1772 start.go:297] selected driver: qemu2
	I0307 13:48:51.751995    1772 start.go:901] validating driver "qemu2" against <nil>
	I0307 13:48:51.752002    1772 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 13:48:51.754334    1772 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 13:48:51.757984    1772 out.go:177] * Automatically selected the socket_vmnet network
	I0307 13:48:51.761087    1772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 13:48:51.761147    1772 cni.go:84] Creating CNI manager for ""
	I0307 13:48:51.761154    1772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:48:51.761158    1772 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 13:48:51.761198    1772 start.go:340] cluster config:
	{Name:addons-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:48:51.765589    1772 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 13:48:51.774001    1772 out.go:177] * Starting "addons-263000" primary control-plane node in "addons-263000" cluster
	I0307 13:48:51.777833    1772 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 13:48:51.777847    1772 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 13:48:51.777857    1772 cache.go:56] Caching tarball of preloaded images
	I0307 13:48:51.777913    1772 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 13:48:51.777919    1772 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 13:48:51.778150    1772 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/config.json ...
	I0307 13:48:51.778163    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/config.json: {Name:mk981e2c99a568cbf3edd7800e56bb7d9477bf42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:48:51.778402    1772 start.go:360] acquireMachinesLock for addons-263000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 13:48:51.778561    1772 start.go:364] duration metric: took 153µs to acquireMachinesLock for "addons-263000"
	I0307 13:48:51.778572    1772 start.go:93] Provisioning new machine with config: &{Name:addons-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 13:48:51.778601    1772 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 13:48:51.783018    1772 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0307 13:48:52.023896    1772 start.go:159] libmachine.API.Create for "addons-263000" (driver="qemu2")
	I0307 13:48:52.023939    1772 client.go:168] LocalClient.Create starting
	I0307 13:48:52.024119    1772 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 13:48:52.070109    1772 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 13:48:52.156957    1772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 13:48:52.661525    1772 main.go:141] libmachine: Creating SSH key...
	I0307 13:48:52.856239    1772 main.go:141] libmachine: Creating Disk image...
	I0307 13:48:52.856248    1772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 13:48:52.856509    1772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2
	I0307 13:48:52.879100    1772 main.go:141] libmachine: STDOUT: 
	I0307 13:48:52.879123    1772 main.go:141] libmachine: STDERR: 
	I0307 13:48:52.879172    1772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2 +20000M
	I0307 13:48:52.889878    1772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 13:48:52.889898    1772 main.go:141] libmachine: STDERR: 
	I0307 13:48:52.889915    1772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2
	I0307 13:48:52.889922    1772 main.go:141] libmachine: Starting QEMU VM...
	I0307 13:48:52.889953    1772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:f9:e2:f9:31:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/disk.qcow2
	I0307 13:48:52.947016    1772 main.go:141] libmachine: STDOUT: 
	I0307 13:48:52.947069    1772 main.go:141] libmachine: STDERR: 
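
The disk provisioning above reduces to two qemu-img calls: convert the raw boot2docker seed image to qcow2, then grow it to the requested 20000 MB. A minimal standalone sketch, assuming a local machine directory (the report uses a Jenkins workspace path instead):

    # qcow2 is sparse, so the +20000M resize allocates host disk lazily.
    MK="$HOME/.minikube/machines/addons-263000"   # hypothetical path, for illustration only
    qemu-img convert -f raw -O qcow2 "$MK/disk.qcow2.raw" "$MK/disk.qcow2"
    qemu-img resize "$MK/disk.qcow2" +20000M
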
	I0307 13:48:52.947073    1772 main.go:141] libmachine: Attempt 0
	I0307 13:48:52.947093    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:48:52.947142    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:48:52.947162    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:48:54.949278    1772 main.go:141] libmachine: Attempt 1
	I0307 13:48:54.949358    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:48:54.949704    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:48:54.949754    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:48:56.952027    1772 main.go:141] libmachine: Attempt 2
	I0307 13:48:56.952169    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:48:56.952491    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:48:56.952541    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:48:58.954669    1772 main.go:141] libmachine: Attempt 3
	I0307 13:48:58.954694    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:48:58.954788    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:48:58.954814    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:49:00.956820    1772 main.go:141] libmachine: Attempt 4
	I0307 13:49:00.956831    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:49:00.956876    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:49:00.956884    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:49:02.958922    1772 main.go:141] libmachine: Attempt 5
	I0307 13:49:02.958930    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:49:02.958958    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:49:02.958964    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:49:04.960976    1772 main.go:141] libmachine: Attempt 6
	I0307 13:49:04.960991    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:49:04.961053    1772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 13:49:04.961073    1772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x65eb8718}
	I0307 13:49:06.963090    1772 main.go:141] libmachine: Attempt 7
	I0307 13:49:06.963115    1772 main.go:141] libmachine: Searching for 3a:f9:e2:f9:31:e8 in /var/db/dhcpd_leases ...
	I0307 13:49:06.963211    1772 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0307 13:49:06.963223    1772 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3a:f9:e2:f9:31:e8 ID:1,3a:f9:e2:f9:31:e8 Lease:0x65eb87d1}
	I0307 13:49:06.963225    1772 main.go:141] libmachine: Found match: 3a:f9:e2:f9:31:e8
	I0307 13:49:06.963234    1772 main.go:141] libmachine: IP: 192.168.105.2
	I0307 13:49:06.963238    1772 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
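
Each "Attempt N" above is one pass over macOS's vmnet DHCP lease file, repeated every two seconds until the guest's MAC address appears; only then does minikube know the VM's IP and start waiting for SSH. A manual equivalent, assuming the usual lease-file layout where the ip_address line sits just above the hw_address line:

    # Wait for the guest MAC (taken from the log above) to receive a lease, then show its entry.
    while ! grep -q '3a:f9:e2:f9:31:e8' /var/db/dhcpd_leases; do sleep 2; done
    grep -B2 '3a:f9:e2:f9:31:e8' /var/db/dhcpd_leases
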
	I0307 13:49:08.984478    1772 machine.go:94] provisionDockerMachine start ...
	I0307 13:49:08.986017    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:08.986526    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:08.986545    1772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 13:49:09.053903    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 13:49:09.053947    1772 buildroot.go:166] provisioning hostname "addons-263000"
	I0307 13:49:09.054027    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.054200    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.054209    1772 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-263000 && echo "addons-263000" | sudo tee /etc/hostname
	I0307 13:49:09.116097    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-263000
	
	I0307 13:49:09.116162    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.116283    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.116292    1772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-263000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-263000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-263000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 13:49:09.168249    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 13:49:09.168269    1772 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18320-1153/.minikube CaCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18320-1153/.minikube}
	I0307 13:49:09.168281    1772 buildroot.go:174] setting up certificates
	I0307 13:49:09.168290    1772 provision.go:84] configureAuth start
	I0307 13:49:09.168293    1772 provision.go:143] copyHostCerts
	I0307 13:49:09.168411    1772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem (1078 bytes)
	I0307 13:49:09.168678    1772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem (1123 bytes)
	I0307 13:49:09.168823    1772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem (1679 bytes)
	I0307 13:49:09.168938    1772 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem org=jenkins.addons-263000 san=[127.0.0.1 192.168.105.2 addons-263000 localhost minikube]
	I0307 13:49:09.339135    1772 provision.go:177] copyRemoteCerts
	I0307 13:49:09.339201    1772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 13:49:09.339222    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:09.367085    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 13:49:09.375699    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 13:49:09.383991    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 13:49:09.395598    1772 provision.go:87] duration metric: took 227.306417ms to configureAuth
	I0307 13:49:09.395606    1772 buildroot.go:189] setting minikube options for container-runtime
	I0307 13:49:09.395692    1772 config.go:182] Loaded profile config "addons-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:49:09.395724    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.395801    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.395805    1772 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 13:49:09.444765    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 13:49:09.444773    1772 buildroot.go:70] root file system type: tmpfs
	I0307 13:49:09.444828    1772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 13:49:09.444884    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.444983    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.445018    1772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 13:49:09.496195    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 13:49:09.496232    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.496340    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.496349    1772 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 13:49:09.858159    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
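
The `diff ... || { mv ...; systemctl ...; }` one-liner above installs the rendered unit only when it differs from what is already on disk; here /lib/systemd/system/docker.service did not exist yet, so the diff fails and the mv/enable/restart branch runs, which produces the "Created symlink" message. A quick way to confirm what systemd ended up loading, assuming the profile is still running:

    # Show the unit systemd actually loaded inside the guest, and whether it is enabled.
    out/minikube-darwin-arm64 -p addons-263000 ssh -- sudo systemctl cat docker
    out/minikube-darwin-arm64 -p addons-263000 ssh -- sudo systemctl is-enabled docker
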
	I0307 13:49:09.858176    1772 machine.go:97] duration metric: took 873.684042ms to provisionDockerMachine
	I0307 13:49:09.858182    1772 client.go:171] duration metric: took 17.83463725s to LocalClient.Create
	I0307 13:49:09.858196    1772 start.go:167] duration metric: took 17.834704625s to libmachine.API.Create "addons-263000"
	I0307 13:49:09.858200    1772 start.go:293] postStartSetup for "addons-263000" (driver="qemu2")
	I0307 13:49:09.858206    1772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 13:49:09.858290    1772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 13:49:09.858300    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:09.883041    1772 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 13:49:09.884410    1772 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 13:49:09.884419    1772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/addons for local assets ...
	I0307 13:49:09.884492    1772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/files for local assets ...
	I0307 13:49:09.884520    1772 start.go:296] duration metric: took 26.318041ms for postStartSetup
	I0307 13:49:09.884895    1772 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/config.json ...
	I0307 13:49:09.885077    1772 start.go:128] duration metric: took 18.106876208s to createHost
	I0307 13:49:09.885096    1772 main.go:141] libmachine: Using SSH client type: native
	I0307 13:49:09.885173    1772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a3da30] 0x100a40290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 13:49:09.885177    1772 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 13:49:09.928081    1772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709848149.818154752
	
	I0307 13:49:09.928087    1772 fix.go:216] guest clock: 1709848149.818154752
	I0307 13:49:09.928091    1772 fix.go:229] Guest: 2024-03-07 13:49:09.818154752 -0800 PST Remote: 2024-03-07 13:49:09.885079 -0800 PST m=+18.216039542 (delta=-66.924248ms)
	I0307 13:49:09.928102    1772 fix.go:200] guest clock delta is within tolerance: -66.924248ms
	I0307 13:49:09.928105    1772 start.go:83] releasing machines lock for "addons-263000", held for 18.149944542s
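
The `%!s(MISSING)` in the command above appears to be an artifact of minikube's log formatter rather than the command that ran; the guest actually executed `date +%s.%N`, and the guest/host difference (about -67 ms here) is accepted because it falls within tolerance. A rough manual version of the same check:

    # Compare guest and host clocks; BSD date on macOS has no %N, so whole seconds suffice here.
    guest=$(out/minikube-darwin-arm64 -p addons-263000 ssh -- date +%s)
    host=$(date +%s)
    echo "guest-host delta: $((guest - host))s"
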
	I0307 13:49:09.928394    1772 ssh_runner.go:195] Run: cat /version.json
	I0307 13:49:09.928405    1772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 13:49:09.928404    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:09.928420    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:09.950946    1772 ssh_runner.go:195] Run: systemctl --version
	I0307 13:49:09.953389    1772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 13:49:10.086561    1772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 13:49:10.086663    1772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 13:49:10.096697    1772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 13:49:10.096710    1772 start.go:494] detecting cgroup driver to use...
	I0307 13:49:10.096955    1772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 13:49:10.107555    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 13:49:10.112726    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 13:49:10.117250    1772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 13:49:10.117285    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 13:49:10.122167    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 13:49:10.126464    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 13:49:10.130674    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 13:49:10.134807    1772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 13:49:10.138551    1772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 13:49:10.142498    1772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 13:49:10.146391    1772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 13:49:10.149875    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:10.234313    1772 ssh_runner.go:195] Run: sudo systemctl restart containerd
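
Although this profile runs the docker runtime, the guest's containerd config is normalized first: the sed calls above pin the pause image, force `SystemdCgroup = false` (the cgroupfs driver chosen for this VM), and point `conf_dir` at /etc/cni/net.d. A hedged way to verify the result of those edits:

    # Confirm the settings the sed edits above wrote into the containerd config.
    out/minikube-darwin-arm64 -p addons-263000 ssh -- \
      grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
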
	I0307 13:49:10.241430    1772 start.go:494] detecting cgroup driver to use...
	I0307 13:49:10.241511    1772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 13:49:10.249153    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 13:49:10.254573    1772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 13:49:10.260958    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 13:49:10.265936    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 13:49:10.271136    1772 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 13:49:10.311547    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 13:49:10.317705    1772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 13:49:10.323907    1772 ssh_runner.go:195] Run: which cri-dockerd
	I0307 13:49:10.325517    1772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 13:49:10.328627    1772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 13:49:10.334403    1772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 13:49:10.419478    1772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 13:49:10.489958    1772 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 13:49:10.490023    1772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 13:49:10.496051    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:10.580368    1772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 13:49:11.735851    1772 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155491916s)
	I0307 13:49:11.735928    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 13:49:11.741649    1772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 13:49:11.748262    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 13:49:11.753739    1772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 13:49:11.827535    1772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 13:49:11.890185    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:11.955554    1772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 13:49:11.962175    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 13:49:11.967427    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:12.032201    1772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 13:49:12.055019    1772 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 13:49:12.055112    1772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 13:49:12.058332    1772 start.go:562] Will wait 60s for crictl version
	I0307 13:49:12.058376    1772 ssh_runner.go:195] Run: which crictl
	I0307 13:49:12.059878    1772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 13:49:12.084561    1772 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
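
crictl only knows which runtime to query via the crictl.yaml written a few lines earlier, which points it at cri-dockerd. The same query with the socket spelled out explicitly, run inside the guest (for example prefixed with `out/minikube-darwin-arm64 -p addons-263000 ssh --`), is useful when that config file is in doubt:

    # Ask the runtime for its version through cri-dockerd's CRI socket directly.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
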
	I0307 13:49:12.084624    1772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 13:49:12.096361    1772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 13:49:12.113254    1772 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 13:49:12.113402    1772 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0307 13:49:12.115092    1772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 13:49:12.119556    1772 kubeadm.go:877] updating cluster {Name:addons-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 13:49:12.119602    1772 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 13:49:12.119640    1772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 13:49:12.129157    1772 docker.go:685] Got preloaded images: 
	I0307 13:49:12.129165    1772 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 13:49:12.129207    1772 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 13:49:12.132622    1772 ssh_runner.go:195] Run: which lz4
	I0307 13:49:12.134077    1772 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0307 13:49:12.135388    1772 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 13:49:12.135400    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0307 13:49:13.379019    1772 docker.go:649] duration metric: took 1.245010375s to copy over tarball
	I0307 13:49:13.379075    1772 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 13:49:14.447938    1772 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.068872625s)
	I0307 13:49:14.447951    1772 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 13:49:14.464045    1772 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 13:49:14.467786    1772 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 13:49:14.473838    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:14.561626    1772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 13:49:16.191849    1772 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.630241292s)
	I0307 13:49:16.191936    1772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 13:49:16.198256    1772 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
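
That list is the contents of the preload tarball after extraction into /var/lib/docker. The same inventory can be taken from the host at any point while the profile is up, mirroring the command in the log:

    # List the images the preload populated inside the guest.
    out/minikube-darwin-arm64 -p addons-263000 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'
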
	I0307 13:49:16.198268    1772 cache_images.go:84] Images are preloaded, skipping loading
	I0307 13:49:16.198274    1772 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.28.4 docker true true} ...
	I0307 13:49:16.198332    1772 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-263000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
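
The rendered kubelet unit above is a systemd drop-in, written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: the empty `ExecStart=` clears the base unit's command (the same trick the docker unit above uses) and the second `ExecStart=` supplies minikube's node-specific flags. To see the merged result on the guest:

    # Show the base kubelet unit plus the 10-kubeadm.conf drop-in that overrides ExecStart.
    out/minikube-darwin-arm64 -p addons-263000 ssh -- sudo systemctl cat kubelet
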
	I0307 13:49:16.198391    1772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 13:49:16.206203    1772 cni.go:84] Creating CNI manager for ""
	I0307 13:49:16.206214    1772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:49:16.206220    1772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 13:49:16.206229    1772 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-263000 NodeName:addons-263000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 13:49:16.206297    1772 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-263000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 13:49:16.206357    1772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 13:49:16.210649    1772 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 13:49:16.210685    1772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 13:49:16.214516    1772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 13:49:16.221325    1772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 13:49:16.227110    1772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
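
The rendered config is staged as kubeadm.yaml.new here and promoted to kubeadm.yaml later in the run. A non-destructive way to sanity-check such a file (a sketch using kubeadm's standard --dry-run flag against the same staged path):

    # Exercise the generated config without mutating the node.
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new
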
	I0307 13:49:16.233155    1772 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0307 13:49:16.234520    1772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
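
Expanded for readability, the one-line /etc/hosts rewrite above does the following (same effect; printf is used here only to make the tab separator explicit):

    # Drop any stale control-plane entry, re-append the current one,
    # then copy the temp file over /etc/hosts with sudo.
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.105.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
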
	I0307 13:49:16.238699    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:16.326067    1772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 13:49:16.335871    1772 certs.go:68] Setting up /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000 for IP: 192.168.105.2
	I0307 13:49:16.335880    1772 certs.go:194] generating shared ca certs ...
	I0307 13:49:16.335889    1772 certs.go:226] acquiring lock for ca certs: {Name:mkd070f5c39ed755a49409426e8106f4ed85635d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.336068    1772 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key
	I0307 13:49:16.384044    1772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt ...
	I0307 13:49:16.384054    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt: {Name:mkb2fa43b86e56f28611cf39cf6b1e2c2a323232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.384345    1772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key ...
	I0307 13:49:16.384349    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key: {Name:mkedd90b99d72ce98dfa64f2cdd371e2ee67886f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.384489    1772 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key
	I0307 13:49:16.497532    1772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt ...
	I0307 13:49:16.497539    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt: {Name:mk1768fc05e0a57fdc323a669a59da19ac854bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.497696    1772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key ...
	I0307 13:49:16.497699    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key: {Name:mk09f27376294eef33fb3442ff9e0e7e802f1781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.497816    1772 certs.go:256] generating profile certs ...
	I0307 13:49:16.497853    1772 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.key
	I0307 13:49:16.497859    1772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt with IP's: []
	I0307 13:49:16.684173    1772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt ...
	I0307 13:49:16.684179    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: {Name:mka2c972577c2ade47674292d12b392145986842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.684411    1772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.key ...
	I0307 13:49:16.684414    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.key: {Name:mk1b0f568f3b819c91a4f3dce4fbee7ed40ace1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.684537    1772 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key.898a7cc9
	I0307 13:49:16.684553    1772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt.898a7cc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0307 13:49:16.916918    1772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt.898a7cc9 ...
	I0307 13:49:16.916938    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt.898a7cc9: {Name:mk93bd8f73641fb441ebd53279becce6b7fa258a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.917259    1772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key.898a7cc9 ...
	I0307 13:49:16.917263    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key.898a7cc9: {Name:mk5dd3656a95c01f9c46db6e32a2ec7a62c50027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.917387    1772 certs.go:381] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt.898a7cc9 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt
	I0307 13:49:16.917663    1772 certs.go:385] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key.898a7cc9 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key
	I0307 13:49:16.917810    1772 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.key
	I0307 13:49:16.917825    1772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.crt with IP's: []
	I0307 13:49:16.972656    1772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.crt ...
	I0307 13:49:16.972661    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.crt: {Name:mk61502a61542f87f0e2502e25b2d4a4aaddbea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.972824    1772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.key ...
	I0307 13:49:16.972827    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.key: {Name:mk3a99bbb1e258c6205e32ae77c51a35fb0aa4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:16.973071    1772 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 13:49:16.973095    1772 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem (1078 bytes)
	I0307 13:49:16.973113    1772 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem (1123 bytes)
	I0307 13:49:16.973130    1772 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem (1679 bytes)
	I0307 13:49:16.973437    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 13:49:16.984019    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 13:49:16.993336    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 13:49:17.002445    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 13:49:17.013022    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 13:49:17.021377    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 13:49:17.029664    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 13:49:17.038126    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 13:49:17.046612    1772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 13:49:17.055364    1772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 13:49:17.062339    1772 ssh_runner.go:195] Run: openssl version
	I0307 13:49:17.064576    1772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 13:49:17.068250    1772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 13:49:17.069784    1772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I0307 13:49:17.069807    1772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 13:49:17.071697    1772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
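
The b5213941.0 name is not arbitrary: it is the OpenSSL subject-name hash of the CA certificate, which CApath-style lookups use to locate the cert. Recomputed from the same files used above:

    # Prints the subject hash (b5213941 for minikubeCA in this run)...
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # ...which must match the <hash>.0 symlink name in /etc/ssl/certs.
    ls -l /etc/ssl/certs/b5213941.0
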
	I0307 13:49:17.075576    1772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 13:49:17.077092    1772 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 13:49:17.077115    1772 kubeadm.go:391] StartCluster: {Name:addons-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:49:17.077183    1772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 13:49:17.085320    1772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 13:49:17.088753    1772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 13:49:17.092071    1772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 13:49:17.095511    1772 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 13:49:17.095518    1772 kubeadm.go:156] found existing configuration files:
	
	I0307 13:49:17.095541    1772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 13:49:17.098940    1772 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 13:49:17.098962    1772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 13:49:17.102480    1772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 13:49:17.105681    1772 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 13:49:17.105708    1772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 13:49:17.108908    1772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 13:49:17.111880    1772 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 13:49:17.111907    1772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 13:49:17.115310    1772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 13:49:17.118795    1772 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 13:49:17.118820    1772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 13:49:17.122135    1772 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 13:49:17.144663    1772 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 13:49:17.144721    1772 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 13:49:17.197949    1772 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 13:49:17.198019    1772 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 13:49:17.198075    1772 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 13:49:17.293245    1772 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 13:49:17.306437    1772 out.go:204]   - Generating certificates and keys ...
	I0307 13:49:17.306477    1772 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 13:49:17.306508    1772 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 13:49:17.477993    1772 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 13:49:17.549493    1772 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 13:49:17.666345    1772 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 13:49:17.753667    1772 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 13:49:18.109627    1772 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 13:49:18.109694    1772 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-263000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 13:49:18.207352    1772 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 13:49:18.207419    1772 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-263000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 13:49:18.289678    1772 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 13:49:18.400435    1772 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 13:49:18.484761    1772 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 13:49:18.484790    1772 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 13:49:18.527573    1772 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 13:49:18.574712    1772 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 13:49:18.667176    1772 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 13:49:18.815839    1772 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 13:49:18.816023    1772 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 13:49:18.817114    1772 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 13:49:18.820784    1772 out.go:204]   - Booting up control plane ...
	I0307 13:49:18.820831    1772 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 13:49:18.820879    1772 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 13:49:18.820947    1772 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 13:49:18.824696    1772 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 13:49:18.824748    1772 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 13:49:18.824768    1772 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 13:49:18.900193    1772 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 13:49:22.401114    1772 kubeadm.go:309] [apiclient] All control plane components are healthy after 3.500980 seconds
	I0307 13:49:22.401178    1772 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 13:49:22.408467    1772 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 13:49:22.917382    1772 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 13:49:22.917483    1772 kubeadm.go:309] [mark-control-plane] Marking the node addons-263000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 13:49:23.422255    1772 kubeadm.go:309] [bootstrap-token] Using token: nkhozd.3tkib72dcs4xrktx
	I0307 13:49:23.428429    1772 out.go:204]   - Configuring RBAC rules ...
	I0307 13:49:23.428487    1772 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 13:49:23.429398    1772 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 13:49:23.433055    1772 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 13:49:23.434156    1772 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0307 13:49:23.435719    1772 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 13:49:23.437046    1772 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 13:49:23.440618    1772 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 13:49:23.612878    1772 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 13:49:23.831870    1772 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 13:49:23.832269    1772 kubeadm.go:309] 
	I0307 13:49:23.832299    1772 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 13:49:23.832309    1772 kubeadm.go:309] 
	I0307 13:49:23.832357    1772 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 13:49:23.832363    1772 kubeadm.go:309] 
	I0307 13:49:23.832380    1772 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 13:49:23.832410    1772 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 13:49:23.832437    1772 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 13:49:23.832440    1772 kubeadm.go:309] 
	I0307 13:49:23.832472    1772 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 13:49:23.832476    1772 kubeadm.go:309] 
	I0307 13:49:23.832507    1772 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 13:49:23.832510    1772 kubeadm.go:309] 
	I0307 13:49:23.832543    1772 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 13:49:23.832580    1772 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 13:49:23.832612    1772 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 13:49:23.832615    1772 kubeadm.go:309] 
	I0307 13:49:23.832672    1772 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 13:49:23.832719    1772 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 13:49:23.832721    1772 kubeadm.go:309] 
	I0307 13:49:23.832780    1772 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nkhozd.3tkib72dcs4xrktx \
	I0307 13:49:23.832841    1772 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b \
	I0307 13:49:23.832852    1772 kubeadm.go:309] 	--control-plane 
	I0307 13:49:23.832856    1772 kubeadm.go:309] 
	I0307 13:49:23.832920    1772 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 13:49:23.832927    1772 kubeadm.go:309] 
	I0307 13:49:23.832985    1772 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nkhozd.3tkib72dcs4xrktx \
	I0307 13:49:23.833033    1772 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b 
	I0307 13:49:23.833113    1772 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
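
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). A sketch to recompute it from the cert written earlier in this run, assuming an RSA CA key as minikube generates:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
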
	I0307 13:49:23.833122    1772 cni.go:84] Creating CNI manager for ""
	I0307 13:49:23.833132    1772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:49:23.837085    1772 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 13:49:23.845100    1772 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 13:49:23.848920    1772 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
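
The 457-byte conflist copied above configures the bridge CNI chain. A representative shape only (illustrative field values, not a byte-for-byte copy of minikube's file; the pod subnet matches the podSubnet configured earlier in this run):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
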
	I0307 13:49:23.854814    1772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 13:49:23.854869    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:23.854879    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-263000 minikube.k8s.io/updated_at=2024_03_07T13_49_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=addons-263000 minikube.k8s.io/primary=true
	I0307 13:49:23.858918    1772 ops.go:34] apiserver oom_adj: -16
	I0307 13:49:23.915951    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:24.418032    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:24.918058    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:25.418049    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:25.917980    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:26.417653    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:26.916907    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:27.417986    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:27.917956    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:28.417955    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:28.917934    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:29.417903    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:29.917915    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:30.417913    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:30.917901    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:31.417879    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:31.917879    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:32.417871    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:32.917854    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:33.417843    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:33.917802    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:34.417815    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:34.917789    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:35.417767    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:35.917737    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:36.417793    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:36.917721    1772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 13:49:36.952408    1772 kubeadm.go:1106] duration metric: took 13.097878958s to wait for elevateKubeSystemPrivileges
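
The burst of `kubectl get sa default` calls above is a poll loop: setup is not considered done until the default ServiceAccount exists in the default namespace. In shell terms it is roughly:

    # Poll (about every 500ms in the log above) until the default
    # ServiceAccount has been created by the controller-manager.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
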
	W0307 13:49:36.952447    1772 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 13:49:36.952453    1772 kubeadm.go:393] duration metric: took 19.875784166s to StartCluster
	I0307 13:49:36.952461    1772 settings.go:142] acquiring lock: {Name:mke1b21879aa30c53aba22e595b1eacff0880bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:36.952622    1772 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:49:36.952889    1772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:49:36.953116    1772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 13:49:36.953134    1772 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 13:49:36.956317    1772 out.go:177] * Verifying Kubernetes components...
	I0307 13:49:36.953170    1772 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 13:49:36.956343    1772 addons.go:69] Setting default-storageclass=true in profile "addons-263000"
	I0307 13:49:36.956352    1772 addons.go:69] Setting ingress=true in profile "addons-263000"
	I0307 13:49:36.956357    1772 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-263000"
	I0307 13:49:36.956360    1772 addons.go:234] Setting addon ingress=true in "addons-263000"
	I0307 13:49:36.956373    1772 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-263000"
	I0307 13:49:36.956378    1772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-263000"
	I0307 13:49:36.956389    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956396    1772 addons.go:69] Setting storage-provisioner=true in profile "addons-263000"
	I0307 13:49:36.956419    1772 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-263000"
	I0307 13:49:36.956432    1772 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-263000"
	I0307 13:49:36.956443    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956392    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956474    1772 addons.go:69] Setting registry=true in profile "addons-263000"
	I0307 13:49:36.956481    1772 addons.go:69] Setting ingress-dns=true in profile "addons-263000"
	I0307 13:49:36.956496    1772 addons.go:234] Setting addon ingress-dns=true in "addons-263000"
	I0307 13:49:36.956506    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956511    1772 addons.go:234] Setting addon registry=true in "addons-263000"
	I0307 13:49:36.956441    1772 addons.go:69] Setting inspektor-gadget=true in profile "addons-263000"
	I0307 13:49:36.956557    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956566    1772 addons.go:234] Setting addon inspektor-gadget=true in "addons-263000"
	I0307 13:49:36.956631    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956700    1772 addons.go:69] Setting metrics-server=true in profile "addons-263000"
	I0307 13:49:36.956711    1772 retry.go:31] will retry after 1.000364067s: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.956344    1772 addons.go:69] Setting yakd=true in profile "addons-263000"
	I0307 13:49:36.956744    1772 addons.go:234] Setting addon yakd=true in "addons-263000"
	I0307 13:49:36.956753    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956718    1772 addons.go:234] Setting addon metrics-server=true in "addons-263000"
	I0307 13:49:36.956769    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956806    1772 retry.go:31] will retry after 1.153881273s: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.956837    1772 retry.go:31] will retry after 839.062794ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.956351    1772 addons.go:69] Setting cloud-spanner=true in profile "addons-263000"
	I0307 13:49:36.956920    1772 addons.go:234] Setting addon cloud-spanner=true in "addons-263000"
	I0307 13:49:36.953269    1772 config.go:182] Loaded profile config "addons-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:49:36.956960    1772 retry.go:31] will retry after 837.287826ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.956447    1772 addons.go:234] Setting addon storage-provisioner=true in "addons-263000"
	I0307 13:49:36.956968    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956977    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956982    1772 retry.go:31] will retry after 820.041814ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.956722    1772 addons.go:69] Setting volumesnapshots=true in profile "addons-263000"
	I0307 13:49:36.957042    1772 retry.go:31] will retry after 907.24251ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.957047    1772 addons.go:234] Setting addon volumesnapshots=true in "addons-263000"
	I0307 13:49:36.957060    1772 retry.go:31] will retry after 802.655369ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.957079    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.956725    1772 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-263000"
	I0307 13:49:36.956349    1772 addons.go:69] Setting gcp-auth=true in profile "addons-263000"
	I0307 13:49:36.963150    1772 mustload.go:65] Loading cluster: addons-263000
	I0307 13:49:36.963159    1772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 13:49:36.957110    1772 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-263000"
	I0307 13:49:36.963221    1772 config.go:182] Loaded profile config "addons-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:49:36.957173    1772 retry.go:31] will retry after 575.438781ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.957348    1772 retry.go:31] will retry after 1.022295219s: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.957366    1772 retry.go:31] will retry after 827.696037ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.958780    1772 addons.go:234] Setting addon default-storageclass=true in "addons-263000"
	I0307 13:49:36.963241    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.966216    1772 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 13:49:36.963481    1772 retry.go:31] will retry after 847.749927ms: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:36.964153    1772 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-263000"
	I0307 13:49:36.964348    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.970170    1772 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 13:49:36.970176    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 13:49:36.970185    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:36.970237    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:36.975162    1772 out.go:177]   - Using image docker.io/busybox:stable
	I0307 13:49:36.984093    1772 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 13:49:36.990186    1772 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 13:49:36.990196    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 13:49:36.990205    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.018442    1772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
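
The pipeline above edits CoreDNS in place: it pulls the coredns ConfigMap, splices a hosts stanza in front of the forward-to-resolv.conf directive, enables query logging, and replaces the ConfigMap so pods can resolve the host machine. The injected stanza (taken from the sed expression above) is:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }
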
	I0307 13:49:37.083951    1772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 13:49:37.141815    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 13:49:37.163931    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 13:49:37.545249    1772 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 13:49:37.549134    1772 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 13:49:37.549140    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 13:49:37.549150    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.708607    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 13:49:37.765221    1772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 13:49:37.769175    1772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 13:49:37.773120    1772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 13:49:37.777240    1772 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 13:49:37.777248    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 13:49:37.777259    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.777491    1772 start.go:948] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0307 13:49:37.777898    1772 node_ready.go:35] waiting up to 6m0s for node "addons-263000" to be "Ready" ...
	I0307 13:49:37.782017    1772 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 13:49:37.786141    1772 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 13:49:37.790242    1772 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 13:49:37.790250    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 13:49:37.790260    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.790785    1772 node_ready.go:49] node "addons-263000" has status "Ready":"True"
	I0307 13:49:37.790804    1772 node_ready.go:38] duration metric: took 12.88475ms for node "addons-263000" to be "Ready" ...
	I0307 13:49:37.790822    1772 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 13:49:37.795059    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 13:49:37.799233    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 13:49:37.799249    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 13:49:37.799265    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.799641    1772 retry.go:31] will retry after 2.086535579s: connect: dial unix /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/monitor: connect: connection refused
	I0307 13:49:37.803156    1772 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 13:49:37.801469    1772 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace to be "Ready" ...
	I0307 13:49:37.807196    1772 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 13:49:37.807207    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 13:49:37.807217    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.818820    1772 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 13:49:37.818851    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 13:49:37.818859    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.868190    1772 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 13:49:37.874183    1772 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 13:49:37.874194    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 13:49:37.874204    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.943706    1772 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 13:49:37.943716    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 13:49:37.956336    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 13:49:37.962165    1772 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 13:49:37.965074    1772 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 13:49:37.965080    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 13:49:37.965090    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.971609    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 13:49:37.990147    1772 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 13:49:37.994203    1772 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 13:49:37.994214    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 13:49:37.994224    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:37.994468    1772 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 13:49:37.994473    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 13:49:37.996650    1772 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 13:49:37.996658    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 13:49:38.000809    1772 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 13:49:38.000817    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 13:49:38.025826    1772 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 13:49:38.025835    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 13:49:38.031777    1772 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 13:49:38.031787    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 13:49:38.055094    1772 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 13:49:38.055107    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 13:49:38.055207    1772 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 13:49:38.055216    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 13:49:38.076920    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 13:49:38.078283    1772 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 13:49:38.078289    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 13:49:38.099234    1772 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 13:49:38.099247    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 13:49:38.103733    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 13:49:38.116269    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 13:49:38.114030    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 13:49:38.114120    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 13:49:38.123104    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 13:49:38.119735    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 13:49:38.123120    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 13:49:38.131100    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 13:49:38.135096    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 13:49:38.143047    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 13:49:38.150989    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 13:49:38.159092    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 13:49:38.163104    1772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 13:49:38.167198    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 13:49:38.167210    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 13:49:38.167221    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:38.182496    1772 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 13:49:38.182506    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 13:49:38.190165    1772 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 13:49:38.190174    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 13:49:38.253518    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 13:49:38.280546    1772 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-263000" context rescaled to 1 replicas
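Note: the kapi.go:248 line above is minikube trimming coredns from kubeadm's default of two replicas down to one on this single-node cluster. A rough standalone equivalent (a sketch, not the client-go call minikube makes internally):

    kubectl -n kube-system scale deployment coredns --replicas=1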
	I0307 13:49:38.327497    1772 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 13:49:38.327507    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 13:49:38.386695    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 13:49:38.437897    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.2739775s)
	I0307 13:49:38.518376    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 13:49:38.518388    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 13:49:38.562626    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 13:49:38.562639    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 13:49:38.615105    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 13:49:38.615116    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 13:49:38.690932    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 13:49:38.690945    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 13:49:38.757629    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049029875s)
	I0307 13:49:38.758617    1772 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 13:49:38.758623    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 13:49:38.843513    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 13:49:38.843525    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 13:49:38.904184    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 13:49:38.904197    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 13:49:38.993842    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 13:49:38.993852    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 13:49:39.106181    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 13:49:39.106192    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 13:49:39.119439    1772 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 13:49:39.119451    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 13:49:39.126998    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
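Note: the "Using image" lines at 13:49:38.116–38.163 list the sidecars that make up the csi-hostpath-driver addon (attacher, provisioner, resizer, snapshotter, livenessprobe, external health monitor, node-driver-registrar, plus the hostpath plugin itself); the apply above installs their RBAC and driver manifests in a single batch. The resulting pods can be inspected with the same label selector the verifier polls below:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver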
	I0307 13:49:39.806826    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:39.893236    1772 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 13:49:39.899189    1772 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 13:49:39.899201    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 13:49:39.899212    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:40.041657    1772 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 13:49:40.041669    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 13:49:40.100285    1772 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 13:49:40.100297    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 13:49:40.158718    1772 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 13:49:40.158729    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 13:49:40.242814    1772 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 13:49:40.242826    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 13:49:40.283371    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.206475834s)
	I0307 13:49:40.283381    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.179686542s)
	I0307 13:49:40.283412    1772 addons.go:470] Verifying addon registry=true in "addons-263000"
	I0307 13:49:40.283419    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167185875s)
	I0307 13:49:40.283424    1772 addons.go:470] Verifying addon metrics-server=true in "addons-263000"
	I0307 13:49:40.287219    1772 out.go:177] * Verifying registry addon...
	I0307 13:49:40.283465    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.164505333s)
	I0307 13:49:40.283511    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.03002525s)
	I0307 13:49:40.283523    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.896860125s)
	I0307 13:49:40.284651    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.313082958s)
	I0307 13:49:40.290577    1772 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 13:49:40.295365    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	W0307 13:49:40.295411    1772 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 13:49:40.295429    1772 retry.go:31] will retry after 309.817702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
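Note: this failure is the usual CRD-establishment race, as the stderr says: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRD that defines it, and the API server has not yet registered the new type when kubectl resolves the resource mapping, so the custom resource is rejected while everything else in the batch is created. minikube's remedy is the short backoff retry seen below at 13:49:40.607, which re-runs the apply with --force once the CRDs from the first attempt are established. A manual sequence that avoids the race entirely (a sketch reusing the manifest paths from this log) is to apply the CRDs first, wait for the Established condition, then apply the custom resource:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml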
	I0307 13:49:40.295432    1772 addons.go:470] Verifying addon ingress=true in "addons-263000"
	I0307 13:49:40.303257    1772 out.go:177] * Verifying ingress addon...
	I0307 13:49:40.295900    1772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 13:49:40.306568    1772 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 13:49:40.310309    1772 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-263000 service yakd-dashboard -n yakd-dashboard
	
	I0307 13:49:40.315905    1772 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 13:49:40.315917    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:40.321162    1772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 13:49:40.321170    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
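Note: each kapi.go:96 line from here on is one iteration of minikube's poll loop over the pods matching the stated label selector; "Pending: [<nil>]" means the first matching pod is still in phase Pending with no ready condition to report yet. A rough standalone equivalent of what these verifiers wait for (selectors and namespaces taken from the kapi.go:75 lines above):

    kubectl -n kube-system wait --for=condition=Ready pod \
        -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
        -l app.kubernetes.io/name=ingress-nginx --timeout=6m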
	I0307 13:49:40.332648    1772 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 13:49:40.332660    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 13:49:40.348876    1772 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 13:49:40.348885    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 13:49:40.370284    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 13:49:40.607378    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 13:49:40.818661    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:40.819560    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:41.121998    1772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.995020917s)
	I0307 13:49:41.122018    1772 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-263000"
	I0307 13:49:41.131211    1772 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 13:49:41.137668    1772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 13:49:41.148768    1772 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 13:49:41.148776    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:41.312534    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:41.312957    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:41.642154    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:41.812578    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:41.813201    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:42.142331    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:42.308269    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:42.312780    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:42.315190    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:42.642798    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:42.812595    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:42.812781    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:43.142345    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:43.313580    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:43.313621    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:43.577170    1772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 13:49:43.577186    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:43.608517    1772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 13:49:43.617885    1772 addons.go:234] Setting addon gcp-auth=true in "addons-263000"
	I0307 13:49:43.617914    1772 host.go:66] Checking if "addons-263000" exists ...
	I0307 13:49:43.618747    1772 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 13:49:43.618755    1772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/addons-263000/id_rsa Username:docker}
	I0307 13:49:43.641288    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:43.652124    1772 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 13:49:43.656170    1772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 13:49:43.660102    1772 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 13:49:43.660108    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 13:49:43.666624    1772 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 13:49:43.666631    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 13:49:43.673345    1772 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 13:49:43.673352    1772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 13:49:43.680750    1772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
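Note: the gcp-auth addon stages the host's application-default credentials (the 162-byte google_application_credentials.json and the project id copied at 13:49:43.577–43.608) and deploys gcp-auth-webhook, a mutating webhook that injects those credentials into workload pods. Once the apply above completes, readiness can be checked with the same selector the verifier uses at 13:49:44.196 (a sketch, assuming kubectl is pointed at this cluster):

    kubectl -n gcp-auth wait --for=condition=Ready pod \
        -l kubernetes.io/minikube-addons=gcp-auth --timeout=3m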
	I0307 13:49:43.837958    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:43.838068    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:44.142262    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:44.184595    1772 addons.go:470] Verifying addon gcp-auth=true in "addons-263000"
	I0307 13:49:44.188723    1772 out.go:177] * Verifying gcp-auth addon...
	I0307 13:49:44.196073    1772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 13:49:44.206928    1772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 13:49:44.206938    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:44.312606    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:44.312832    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:44.644418    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:44.700039    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:44.807951    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:44.812887    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:44.812889    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:45.142745    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:45.199790    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:45.313548    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:45.313780    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:45.643517    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:45.697971    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:45.813294    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:45.813421    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:46.142458    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:46.199404    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:46.312648    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:46.312661    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:46.640524    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:46.698510    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:46.812546    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:46.812668    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:47.142197    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:47.199421    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:47.307535    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:47.312537    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:47.313038    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:47.642542    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:47.700024    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:47.813169    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:47.813236    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:48.142447    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:48.199738    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:48.312626    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:48.313184    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:48.642435    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:48.699962    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:48.813339    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:48.813482    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:49.142530    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:49.199731    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:49.307954    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:49.312696    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:49.313328    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:49.643601    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:49.699625    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:49.812836    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:49.813120    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:50.142616    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:50.199973    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:50.312511    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:50.312561    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:50.642335    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:50.699346    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:50.813819    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:50.816055    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:51.144310    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:51.200914    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:51.313000    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:51.313336    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:51.642586    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:51.698870    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:51.807645    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:51.813633    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:51.813688    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:52.141868    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:52.200500    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:52.313334    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:52.314010    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:52.643052    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:52.700179    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:52.815489    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:52.815506    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:53.144042    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:53.198681    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:53.313519    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:53.313667    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:53.642423    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:53.699937    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:53.807908    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:53.812552    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:53.812821    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:54.142340    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:54.199864    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:54.312905    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:54.314025    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:54.642411    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:54.699802    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:54.812603    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:54.812678    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:55.142024    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:55.197868    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:55.312833    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:55.312838    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:55.642781    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:55.699465    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:55.812954    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:55.813109    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:56.142220    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:56.199251    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:56.307818    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:56.312615    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:56.312750    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:56.641389    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:56.698572    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:56.812913    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:56.813029    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:57.144512    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:57.199805    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:57.313162    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:57.313285    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:57.642404    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:57.699328    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:57.812215    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:57.812280    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:58.141875    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:58.199268    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:58.312297    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:58.312508    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:58.642075    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:58.699339    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:58.807494    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:49:58.812246    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:58.812558    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:59.142258    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:59.199196    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:59.312246    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:59.312356    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:49:59.642145    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:49:59.699330    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:49:59.812219    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:49:59.812578    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:00.142140    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:00.199466    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:00.312380    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:00.312380    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:00.642086    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:00.699270    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:00.812300    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:00.812516    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:01.142103    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:01.199034    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:01.307764    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:01.312047    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:01.312743    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:01.642527    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:01.697938    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:01.812212    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:01.812763    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:02.141768    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:02.197190    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:02.312193    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:02.312513    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:02.641942    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:02.699154    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:02.812668    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:02.812861    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:03.142415    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:03.199296    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:03.307754    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:03.312093    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:03.312880    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:03.642257    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:03.698407    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:03.812888    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:03.813033    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:04.142204    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:04.198980    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:04.312000    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:04.312625    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:04.640981    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:04.699085    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:04.812334    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:04.812338    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:05.142171    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:05.199294    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:05.312137    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:05.312155    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:05.641963    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:05.699054    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:05.807324    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:05.812119    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:05.812259    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:06.140721    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:06.199070    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:06.312847    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:06.313146    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:06.641910    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:06.697992    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:06.812912    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:06.813003    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:07.141710    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:07.199012    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:07.312920    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:07.313176    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:07.642590    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:07.700052    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:07.807641    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:07.812663    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:07.812790    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:08.142013    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:08.199440    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:08.312314    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:08.312483    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:08.641917    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:08.699251    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:08.812152    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:08.812225    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:09.142160    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:09.199073    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:09.311812    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:09.312291    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:09.642095    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:09.699045    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:09.811838    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:09.812426    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:10.141251    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:10.199528    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:10.307171    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:10.312275    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:10.312930    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:10.641606    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:10.698887    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:10.811927    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:10.811964    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:11.143815    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:11.199096    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:11.314159    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:11.314368    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:11.642276    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:11.697630    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:11.812191    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:11.813143    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:12.141630    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:12.197301    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:12.307384    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:12.311862    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:12.312902    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:12.641934    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:12.699448    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:12.812134    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:12.812167    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:13.141883    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:13.199544    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:13.311842    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:13.312039    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:13.641865    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:13.699504    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:13.812170    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:13.812639    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:14.142279    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:14.198967    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:14.308093    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:14.312019    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:14.312143    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:14.641919    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:14.699031    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:14.811648    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:14.812221    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:15.143518    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:15.199265    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:15.313979    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:15.314146    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:15.641879    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:15.698938    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:15.813242    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:15.813381    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:16.141809    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:16.199029    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:16.309335    1772 pod_ready.go:102] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"False"
	I0307 13:50:16.312021    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:16.312210    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:16.641606    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:16.698181    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:16.812909    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:16.813110    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:17.141704    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:17.198866    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:17.306978    1772 pod_ready.go:92] pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.306990    1772 pod_ready.go:81] duration metric: took 39.504705709s for pod "coredns-5dd5756b68-j8npl" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.306995    1772 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w7qrt" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.307873    1772 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w7qrt" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w7qrt" not found
	I0307 13:50:17.307879    1772 pod_ready.go:81] duration metric: took 881.458µs for pod "coredns-5dd5756b68-w7qrt" in "kube-system" namespace to be "Ready" ...
	E0307 13:50:17.307883    1772 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w7qrt" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w7qrt" not found
	I0307 13:50:17.307886    1772 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.310432    1772 pod_ready.go:92] pod "etcd-addons-263000" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.310440    1772 pod_ready.go:81] duration metric: took 2.552083ms for pod "etcd-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.310445    1772 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.312837    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:17.313018    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:17.313855    1772 pod_ready.go:92] pod "kube-apiserver-addons-263000" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.313862    1772 pod_ready.go:81] duration metric: took 3.414417ms for pod "kube-apiserver-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.313866    1772 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.316076    1772 pod_ready.go:92] pod "kube-controller-manager-addons-263000" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.316081    1772 pod_ready.go:81] duration metric: took 2.212208ms for pod "kube-controller-manager-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.316086    1772 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9p6xr" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.507976    1772 pod_ready.go:92] pod "kube-proxy-9p6xr" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.507985    1772 pod_ready.go:81] duration metric: took 191.899625ms for pod "kube-proxy-9p6xr" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.507990    1772 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.641655    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:17.698560    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:17.814050    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:17.814130    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:17.907843    1772 pod_ready.go:92] pod "kube-scheduler-addons-263000" in "kube-system" namespace has status "Ready":"True"
	I0307 13:50:17.907849    1772 pod_ready.go:81] duration metric: took 399.864167ms for pod "kube-scheduler-addons-263000" in "kube-system" namespace to be "Ready" ...
	I0307 13:50:17.907852    1772 pod_ready.go:38] duration metric: took 40.117920583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
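
The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True (coredns took 39.5s; the rest were already Ready). As a rough illustration — a minimal client-go sketch, not minikube's actual implementation — the per-pod check reduces to scanning the pod's status conditions:

    // podready_sketch.go: hypothetical helper illustrating the kind of
    // check recorded by the pod_ready.go lines above.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the pod's PodReady condition is True,
    // which is what `has status "Ready":"True"` corresponds to.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
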
	I0307 13:50:17.907862    1772 api_server.go:52] waiting for apiserver process to appear ...
	I0307 13:50:17.907924    1772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 13:50:17.914755    1772 api_server.go:72] duration metric: took 40.962525666s to wait for apiserver process to appear ...
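
Before probing health over HTTP, the runner confirms an apiserver process exists by running pgrep on the guest; the exit status is the whole signal (0 = at least one match). A hedged sketch of that pattern, run locally here rather than through minikube's ssh_runner; the later `sudo systemctl is-active --quiet service kubelet` check at 13:50:18 has the same shape:

    // pgrep_sketch.go: hypothetical stand-in for the ssh_runner call at
    // 13:50:17.907924; exec.Command mirrors the remote shell invocation.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep -xnf matches the full command line of the newest process;
        // a nil error means exit status 0, i.e. a kube-apiserver was found.
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        fmt.Println("apiserver process present:", err == nil)
    }
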
	I0307 13:50:17.914772    1772 api_server.go:88] waiting for apiserver healthz status ...
	I0307 13:50:17.914780    1772 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0307 13:50:17.918032    1772 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0307 13:50:17.918789    1772 api_server.go:141] control plane version: v1.28.4
	I0307 13:50:17.918797    1772 api_server.go:131] duration metric: took 4.0225ms to wait for apiserver health ...
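
The healthz wait then reduces to an HTTPS GET against the control-plane endpoint, succeeding once the response is a 200 with the literal body "ok", exactly as logged at api_server.go:279. A minimal sketch of such a probe; skipping certificate verification is an illustrative shortcut only (a real client would trust the cluster CA):

    // healthz_sketch.go: hypothetical probe mirroring api_server.go:253/279.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative shortcut; verify against the cluster CA
                // instead of skipping verification in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println("healthy:", resp.StatusCode == http.StatusOK && string(body) == "ok")
    }
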
	I0307 13:50:17.918800    1772 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 13:50:18.113392    1772 system_pods.go:59] 17 kube-system pods found
	I0307 13:50:18.113402    1772 system_pods.go:61] "coredns-5dd5756b68-j8npl" [37107ddd-61d6-437b-a460-6cf9f1956154] Running
	I0307 13:50:18.113406    1772 system_pods.go:61] "csi-hostpath-attacher-0" [8b16e86b-465e-4265-a784-fedb7b43a8ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 13:50:18.113409    1772 system_pods.go:61] "csi-hostpath-resizer-0" [b2b78271-c060-45f0-8d99-86d93c16eef1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 13:50:18.113412    1772 system_pods.go:61] "csi-hostpathplugin-snl6t" [77d28788-0230-4ee2-b860-cc6a2c27b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 13:50:18.113414    1772 system_pods.go:61] "etcd-addons-263000" [1f4daba8-05c0-4b26-9a93-eb1ad00b4998] Running
	I0307 13:50:18.113417    1772 system_pods.go:61] "kube-apiserver-addons-263000" [a4879d19-764e-46be-8660-751c0baffdf2] Running
	I0307 13:50:18.113419    1772 system_pods.go:61] "kube-controller-manager-addons-263000" [4a0a5f02-a514-4269-8f9e-f4db0899f008] Running
	I0307 13:50:18.113421    1772 system_pods.go:61] "kube-ingress-dns-minikube" [2b265f89-7503-4f45-b8e5-edaee7812238] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 13:50:18.113423    1772 system_pods.go:61] "kube-proxy-9p6xr" [ffe6c954-bd6f-464d-a2b0-f679e2a1b0ba] Running
	I0307 13:50:18.113425    1772 system_pods.go:61] "kube-scheduler-addons-263000" [3c222db7-df35-4000-b35e-ef7c75f5b1e3] Running
	I0307 13:50:18.113427    1772 system_pods.go:61] "metrics-server-69cf46c98-bq5l7" [3cbd03d0-e96e-4a05-b38e-f0a9aa3b5fb7] Running
	I0307 13:50:18.113428    1772 system_pods.go:61] "nvidia-device-plugin-daemonset-lkgxm" [f2dc7d70-d24c-4ff8-b079-b1dd168beae6] Running
	I0307 13:50:18.113431    1772 system_pods.go:61] "registry-proxy-rcw2g" [631fd6f8-a701-47ff-ba2e-767f14a10490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 13:50:18.113433    1772 system_pods.go:61] "registry-zp2kw" [23c4da1e-1600-4bf6-bd46-b55a29d69d26] Running
	I0307 13:50:18.113436    1772 system_pods.go:61] "snapshot-controller-58dbcc7b99-629zs" [7d07b993-0203-4c69-be76-9b1a145a5731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 13:50:18.113439    1772 system_pods.go:61] "snapshot-controller-58dbcc7b99-rzhl4" [6e97847d-f38a-4e7c-bf2e-b8c0cf07d5cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 13:50:18.113441    1772 system_pods.go:61] "storage-provisioner" [e1a380a1-1895-4a32-b3c4-f2c8d3dc2a88] Running
	I0307 13:50:18.113444    1772 system_pods.go:74] duration metric: took 194.644625ms to wait for pod list to return data ...
	I0307 13:50:18.113448    1772 default_sa.go:34] waiting for default service account to be created ...
	I0307 13:50:18.141342    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:18.197535    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:18.305819    1772 default_sa.go:45] found service account: "default"
	I0307 13:50:18.305830    1772 default_sa.go:55] duration metric: took 192.383ms for default service account to be created ...
	I0307 13:50:18.305834    1772 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 13:50:18.312624    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:18.313377    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:18.515954    1772 system_pods.go:86] 17 kube-system pods found
	I0307 13:50:18.515965    1772 system_pods.go:89] "coredns-5dd5756b68-j8npl" [37107ddd-61d6-437b-a460-6cf9f1956154] Running
	I0307 13:50:18.515970    1772 system_pods.go:89] "csi-hostpath-attacher-0" [8b16e86b-465e-4265-a784-fedb7b43a8ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 13:50:18.515973    1772 system_pods.go:89] "csi-hostpath-resizer-0" [b2b78271-c060-45f0-8d99-86d93c16eef1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 13:50:18.515976    1772 system_pods.go:89] "csi-hostpathplugin-snl6t" [77d28788-0230-4ee2-b860-cc6a2c27b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 13:50:18.515979    1772 system_pods.go:89] "etcd-addons-263000" [1f4daba8-05c0-4b26-9a93-eb1ad00b4998] Running
	I0307 13:50:18.515982    1772 system_pods.go:89] "kube-apiserver-addons-263000" [a4879d19-764e-46be-8660-751c0baffdf2] Running
	I0307 13:50:18.515984    1772 system_pods.go:89] "kube-controller-manager-addons-263000" [4a0a5f02-a514-4269-8f9e-f4db0899f008] Running
	I0307 13:50:18.515986    1772 system_pods.go:89] "kube-ingress-dns-minikube" [2b265f89-7503-4f45-b8e5-edaee7812238] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 13:50:18.515988    1772 system_pods.go:89] "kube-proxy-9p6xr" [ffe6c954-bd6f-464d-a2b0-f679e2a1b0ba] Running
	I0307 13:50:18.515990    1772 system_pods.go:89] "kube-scheduler-addons-263000" [3c222db7-df35-4000-b35e-ef7c75f5b1e3] Running
	I0307 13:50:18.515991    1772 system_pods.go:89] "metrics-server-69cf46c98-bq5l7" [3cbd03d0-e96e-4a05-b38e-f0a9aa3b5fb7] Running
	I0307 13:50:18.515993    1772 system_pods.go:89] "nvidia-device-plugin-daemonset-lkgxm" [f2dc7d70-d24c-4ff8-b079-b1dd168beae6] Running
	I0307 13:50:18.515995    1772 system_pods.go:89] "registry-proxy-rcw2g" [631fd6f8-a701-47ff-ba2e-767f14a10490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 13:50:18.516008    1772 system_pods.go:89] "registry-zp2kw" [23c4da1e-1600-4bf6-bd46-b55a29d69d26] Running
	I0307 13:50:18.516012    1772 system_pods.go:89] "snapshot-controller-58dbcc7b99-629zs" [7d07b993-0203-4c69-be76-9b1a145a5731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 13:50:18.516016    1772 system_pods.go:89] "snapshot-controller-58dbcc7b99-rzhl4" [6e97847d-f38a-4e7c-bf2e-b8c0cf07d5cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 13:50:18.516018    1772 system_pods.go:89] "storage-provisioner" [e1a380a1-1895-4a32-b3c4-f2c8d3dc2a88] Running
	I0307 13:50:18.516022    1772 system_pods.go:126] duration metric: took 210.190041ms to wait for k8s-apps to be running ...
	I0307 13:50:18.516026    1772 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 13:50:18.516093    1772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 13:50:18.522749    1772 system_svc.go:56] duration metric: took 6.721917ms WaitForService to wait for kubelet
	I0307 13:50:18.522756    1772 kubeadm.go:576] duration metric: took 41.570541291s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 13:50:18.522765    1772 node_conditions.go:102] verifying NodePressure condition ...
	I0307 13:50:18.641817    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:18.698877    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:18.707058    1772 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 13:50:18.707066    1772 node_conditions.go:123] node cpu capacity is 2
	I0307 13:50:18.707083    1772 node_conditions.go:105] duration metric: took 184.318083ms to run NodePressure ...
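
The NodePressure step reads each node's capacity straight from the API; the figures above (2 CPUs, 17734596Ki of ephemeral storage) are the QEMU guest's allocation. A sketch of that read, assuming a clientset configured elsewhere — again an illustration, not minikube's code:

    // nodecap_sketch.go: hypothetical reader for the node_conditions.go
    // figures above; plugs into any configured client-go clientset.
    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
        }
        return nil
    }
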
	I0307 13:50:18.707092    1772 start.go:240] waiting for startup goroutines ...
	I0307 13:50:18.813313    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:18.813417    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:19.141828    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:19.198880    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:19.313476    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:19.313548    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:19.640890    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:19.698825    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:19.813786    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:19.813941    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:20.140336    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:20.198993    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:20.313262    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:20.313973    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:20.641520    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:20.699048    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:20.813625    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:20.813743    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:21.141660    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:21.199043    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:21.314489    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:21.314576    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:21.641907    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:21.698138    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:21.815758    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:21.815914    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:22.141546    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:22.199049    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:22.314083    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:22.314264    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:22.641409    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:22.699049    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:22.814348    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:22.814548    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:23.141541    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:23.199262    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:23.313558    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:23.313613    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:23.641730    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:23.698968    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:23.814494    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:23.814499    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:24.210231    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:24.211383    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:24.313884    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 13:50:24.314028    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:24.642473    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:24.696800    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:24.813450    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:24.813892    1772 kapi.go:107] duration metric: took 44.518987333s to wait for kubernetes.io/minikube-addons=registry ...
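
Each kapi.go:96 line above is one iteration of a label-selector poll: list the pods matching an addon's label and keep reporting until every one is Running. The registry selector is the first to clear here, after 44.5s; the gcp-auth, ingress-nginx, and csi-hostpath-driver selectors keep polling below. A hedged client-go reconstruction of that loop — pod namespace, interval, and helper names are assumptions, not minikube's actual code:

    // kapiwait_sketch.go: hypothetical reconstruction of the polling
    // behind the kapi.go:96 lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPodsWithLabel(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            allRunning := err == nil && len(pods.Items) > 0
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false
                    }
                }
            }
            if allRunning {
                return nil
            }
            fmt.Printf("waiting for pod %q\n", selector)
            time.Sleep(500 * time.Millisecond) // interval is an assumption
        }
        return fmt.Errorf("timed out waiting for %q", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitForPodsWithLabel(c, "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }
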
	I0307 13:50:25.141451    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:25.198461    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:25.313569    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:25.641831    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:25.699031    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:25.813581    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:26.141454    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:26.199084    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:26.313817    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:26.641816    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:26.697703    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:26.813717    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:27.141397    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:27.198855    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:27.394017    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:27.641711    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:27.698899    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:27.813517    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:28.140820    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:28.198482    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:28.313336    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:28.641760    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:28.698509    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:28.813139    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:29.141546    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:29.198543    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:29.313284    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:29.641860    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:29.698521    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:29.813261    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:30.141046    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:30.198507    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:30.313128    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:30.641829    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:30.698575    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:30.813334    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:31.141389    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:31.199011    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:31.313442    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:31.641325    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:31.697220    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:31.813021    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:32.141691    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:32.198789    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:32.312810    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:32.641191    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:32.698665    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:32.813069    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:33.141176    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:33.198297    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:33.312422    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:33.641665    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:33.698464    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:33.812974    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:34.142169    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:34.197715    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:34.313075    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:34.643261    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:34.698643    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:34.813216    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:35.141505    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:35.198273    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:35.313066    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:35.640636    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:35.698598    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:35.812482    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:36.141781    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:36.198460    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:36.313025    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:36.641145    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:36.697622    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:36.813240    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:37.141089    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:37.198309    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:37.313206    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:37.641387    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:37.698917    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:37.813613    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:38.141325    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:38.199397    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:38.311949    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:38.641292    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:38.698487    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:38.813044    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:39.141622    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:39.198552    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:39.313299    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:39.641167    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:39.698200    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:39.812917    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:40.141287    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:40.198260    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:40.312990    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:40.640606    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:40.698366    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:40.812915    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:41.141020    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:41.198562    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:41.313255    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:41.641007    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:41.697486    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:41.812794    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:42.141012    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:42.198718    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:42.313367    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:42.641665    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:42.698583    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:42.812963    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:43.141479    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:43.198327    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:43.313202    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:43.640293    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:43.698385    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:43.812894    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:44.140662    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:44.198355    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:44.313670    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:44.641237    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:44.698187    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:44.812982    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:45.140085    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:45.198246    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:45.313158    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:45.641200    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:45.698396    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:45.813008    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:46.141209    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:46.198442    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:46.310851    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:46.641664    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:46.696865    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:46.813291    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:47.141206    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:47.198713    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:47.311352    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:47.640902    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:47.698059    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:47.812986    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:48.145629    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:48.198354    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:48.313010    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:48.641127    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:48.698358    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:48.812886    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:49.139233    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:49.198253    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:49.313371    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:49.641124    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:49.698327    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:49.812719    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:50.139541    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:50.198261    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:50.313184    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:50.641732    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:50.697993    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:50.812924    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:51.141212    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:51.198524    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:51.312606    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:51.640342    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:51.697408    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:51.813060    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:52.140573    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:52.198476    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:52.313170    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:52.642538    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:52.698282    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:52.813086    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:53.141014    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:53.198324    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:53.313131    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:53.640955    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:53.698254    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:53.812984    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:54.141279    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:54.198532    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:54.313198    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:54.640905    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:54.699822    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:54.813148    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:55.141015    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:55.198182    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:55.312712    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:55.640671    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:55.698073    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:55.812883    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:56.141329    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:56.200874    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:56.312702    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:56.640787    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:56.696937    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:56.812505    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:57.141297    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:57.197737    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:57.312938    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:57.640857    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:57.697945    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:57.812938    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:58.140581    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:58.197850    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:58.312796    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:58.641290    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:58.698060    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:58.812864    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:59.140795    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:59.198177    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:59.313955    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:50:59.641301    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:50:59.698944    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:50:59.812876    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:00.140693    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:00.197874    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:00.312594    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:00.641328    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:00.698266    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:00.812789    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:01.141191    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:01.198252    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:01.312546    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:01.640865    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:01.697149    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:01.812634    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:02.142211    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:02.198013    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:02.312583    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:02.640684    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:02.697953    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:02.813145    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:03.140478    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:03.198306    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:03.312578    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:03.641511    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:03.698018    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:03.812585    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:04.141278    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:04.198156    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:04.312654    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:04.640453    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:04.697921    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:04.812630    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:05.140283    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:05.198019    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:05.313136    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:05.640650    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:05.698258    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:05.813268    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:06.140558    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:06.198157    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:06.312713    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:06.640652    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:06.697279    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:06.812764    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:07.141040    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:07.327312    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:07.327551    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:07.641887    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:07.698250    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:07.812468    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:08.140741    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:08.198097    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:08.312753    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:08.640616    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:08.697964    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:08.814076    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:09.140546    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:09.197837    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:09.312393    1772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 13:51:09.640522    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:09.697700    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:09.810938    1772 kapi.go:107] duration metric: took 1m29.506368958s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 13:51:10.140254    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:10.198331    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:10.640887    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:10.697530    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:11.141025    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:11.198004    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:11.640680    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:11.696585    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:12.140701    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:12.196769    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:12.640223    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:12.698580    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:13.140593    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:13.196952    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:13.639225    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:13.697552    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:14.141751    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:14.198031    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:14.638965    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:14.697507    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:15.140866    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:15.197631    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:15.642067    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:15.697519    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:16.140659    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:16.197683    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:16.640429    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:16.696731    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:17.140308    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:17.197624    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:17.640234    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:17.697718    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:18.139998    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:18.196570    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:18.639985    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:18.697757    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:19.140678    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:19.197371    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:19.640422    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:19.697447    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:20.140434    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:20.197436    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:20.640200    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:20.697659    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:21.140347    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:21.197393    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:21.640390    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:21.696115    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:22.140063    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:22.197475    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:22.640318    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:22.697321    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:23.140622    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:23.195506    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:23.640403    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:23.697268    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:24.140335    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:24.197459    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:24.639985    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:24.697352    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:25.140244    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:25.197348    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:25.640335    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:25.697593    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:26.140275    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:26.199076    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:26.639854    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:26.696316    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:27.140250    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:27.198093    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:27.640071    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:27.697601    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:28.140547    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:28.197299    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:28.640753    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:28.697171    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:29.140389    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:29.197351    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:29.639998    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:29.697315    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:30.140202    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:30.197370    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:30.640192    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 13:51:30.697615    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:31.140138    1772 kapi.go:107] duration metric: took 1m50.004925s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 13:51:31.197313    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:31.696650    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:32.197410    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:32.697692    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:33.197554    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:33.697477    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:34.197631    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:34.697758    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:35.197500    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:35.697547    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:36.197587    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:36.696416    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:37.198123    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:37.697180    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:38.196931    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:38.697359    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:39.197553    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:39.697244    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:40.197059    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:40.697216    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:41.197487    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:41.696133    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:42.197231    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:42.697069    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:43.197391    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:43.697047    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:44.197377    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:44.697841    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:45.197479    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:45.697283    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:46.197418    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:46.695800    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:47.197367    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:47.696986    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:48.197216    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:48.697317    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:49.196438    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:49.697435    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:50.197133    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:50.696095    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:51.197381    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:51.695519    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:52.197073    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:52.696793    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:53.197041    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:53.697053    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:54.197225    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:54.696937    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:55.196379    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:55.696950    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:56.196760    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:56.695519    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:57.197145    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:57.697267    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:58.197086    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:58.697047    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:59.197261    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:51:59.696913    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:00.198027    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:00.696933    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:01.196982    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:01.695873    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:02.196924    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:02.696571    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:03.196877    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:03.697028    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:04.196702    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:04.696952    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:05.196734    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:05.696877    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:06.196952    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:06.695248    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:07.196720    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:07.696577    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:08.196774    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:08.697069    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:09.196795    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:09.696700    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:10.196761    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:10.696766    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:11.196482    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:11.695181    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:12.196812    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:12.696712    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:13.196894    1772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 13:52:13.696418    1772 kapi.go:107] duration metric: took 2m29.503683666s to wait for kubernetes.io/minikube-addons=gcp-auth ...
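	(The kapi.go:96/kapi.go:107 pairs above are minikube's addon wait loop: each label selector is polled at a roughly 500ms cadence until its pods leave Pending, and the total wait is then recorded as a duration metric. Below is a minimal client-go sketch of such a loop; the function name, the Running-phase check, and the fixed 500ms interval are illustrative assumptions, not minikube's actual implementation.)

    package kapi

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls pods matching selector in ns until all of them
    // reach Running, logging in the same shape as the kapi.go lines above.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            log.Printf("duration metric: took %s to wait for %s ...", time.Since(start), selector)
        }
        return err
    }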
	I0307 13:52:13.700309    1772 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-263000 cluster.
	I0307 13:52:13.703514    1772 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 13:52:13.707549    1772 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
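	(For the `gcp-auth-skip-secret` hint above, the label goes on the pod's own metadata; the webhook then leaves that pod unmutated. A hypothetical manifest expressed with client-go types follows; the pod name, namespace, and image are made up, and the "true" value follows minikube's documented convention.)

    package addons

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithoutGCPCreds builds a pod the gcp-auth webhook should skip,
    // opted out via the gcp-auth-skip-secret label.
    func podWithoutGCPCreds() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds", // hypothetical name
                Namespace: "default",
                Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
            },
        }
    }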
	I0307 13:52:13.711554    1772 out.go:177] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, default-storageclass, cloud-spanner, metrics-server, nvidia-device-plugin, yakd, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0307 13:52:13.715490    1772 addons.go:505] duration metric: took 2m36.765832s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner default-storageclass cloud-spanner metrics-server nvidia-device-plugin yakd inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0307 13:52:13.715504    1772 start.go:245] waiting for cluster config update ...
	I0307 13:52:13.715524    1772 start.go:254] writing updated cluster config ...
	I0307 13:52:13.715903    1772 ssh_runner.go:195] Run: rm -f paused
	I0307 13:52:13.865016    1772 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0307 13:52:13.866718    1772 out.go:177] * Done! kubectl is now configured to use "addons-263000" cluster and "default" namespace by default
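	(The version line above is informational rather than a failure: kubectl 1.29.1 against a 1.28.4 API server is within kubectl's supported skew of one minor version in either direction.)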
	
	
	==> Docker <==
	Mar 07 21:53:45 addons-263000 dockerd[1106]: time="2024-03-07T21:53:45.509516251Z" level=info msg="ignoring event" container=bad4caf1a1284e8d7ae753d52f99f3fdbce168352d03b484a7d82152191b81a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:53:45 addons-263000 dockerd[1112]: time="2024-03-07T21:53:45.509613400Z" level=info msg="shim disconnected" id=bad4caf1a1284e8d7ae753d52f99f3fdbce168352d03b484a7d82152191b81a3 namespace=moby
	Mar 07 21:53:45 addons-263000 dockerd[1112]: time="2024-03-07T21:53:45.509653868Z" level=warning msg="cleaning up after shim disconnected" id=bad4caf1a1284e8d7ae753d52f99f3fdbce168352d03b484a7d82152191b81a3 namespace=moby
	Mar 07 21:53:45 addons-263000 dockerd[1112]: time="2024-03-07T21:53:45.509658161Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.314312360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.314343326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.314363123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.314389962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:53:46 addons-263000 dockerd[1106]: time="2024-03-07T21:53:46.340590570Z" level=info msg="ignoring event" container=2cf20bdb1244ab4fec99a3d7045d6d2f55635c9ff7ff058f9c4e1355ec520b92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.340716809Z" level=info msg="shim disconnected" id=2cf20bdb1244ab4fec99a3d7045d6d2f55635c9ff7ff058f9c4e1355ec520b92 namespace=moby
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.340749775Z" level=warning msg="cleaning up after shim disconnected" id=2cf20bdb1244ab4fec99a3d7045d6d2f55635c9ff7ff058f9c4e1355ec520b92 namespace=moby
	Mar 07 21:53:46 addons-263000 dockerd[1112]: time="2024-03-07T21:53:46.340754026Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:53:48 addons-263000 dockerd[1106]: time="2024-03-07T21:53:48.769840295Z" level=info msg="ignoring event" container=1fc08b479096239ce3bf7f45dee743c13672f31cc2ccfad6073bf57d57c71179 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:53:48 addons-263000 dockerd[1112]: time="2024-03-07T21:53:48.769938111Z" level=info msg="shim disconnected" id=1fc08b479096239ce3bf7f45dee743c13672f31cc2ccfad6073bf57d57c71179 namespace=moby
	Mar 07 21:53:48 addons-263000 dockerd[1112]: time="2024-03-07T21:53:48.769969285Z" level=warning msg="cleaning up after shim disconnected" id=1fc08b479096239ce3bf7f45dee743c13672f31cc2ccfad6073bf57d57c71179 namespace=moby
	Mar 07 21:53:48 addons-263000 dockerd[1112]: time="2024-03-07T21:53:48.769973161Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1106]: time="2024-03-07T21:53:52.549940342Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.590504095Z" level=info msg="shim disconnected" id=e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881 namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1106]: time="2024-03-07T21:53:52.590696100Z" level=info msg="ignoring event" container=e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.590758156Z" level=warning msg="cleaning up after shim disconnected" id=e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881 namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.590769617Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.674621986Z" level=info msg="shim disconnected" id=b24690213955db316e851f961e70b9e36f39c3ced9fc2859d60d9a4f5ac6fee1 namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.674651327Z" level=warning msg="cleaning up after shim disconnected" id=b24690213955db316e851f961e70b9e36f39c3ced9fc2859d60d9a4f5ac6fee1 namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1112]: time="2024-03-07T21:53:52.674655786Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:53:52 addons-263000 dockerd[1106]: time="2024-03-07T21:53:52.674742974Z" level=info msg="ignoring event" container=b24690213955db316e851f961e70b9e36f39c3ced9fc2859d60d9a4f5ac6fee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
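	(The 21:53:52 entry records Docker's normal stop escalation: the container received SIGTERM (signal 15), was given its 2s grace period, and was then force-killed when it had not exited, after which the usual shim teardown messages follow.)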
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2cf20bdb1244a       dd1b12fcb6097                                                                                                                10 seconds ago       Exited              hello-world-app           1                   2c525b2638653       hello-world-app-5d77478584-4zz98
	37688c6408255       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                30 seconds ago       Running             nginx                     0                   0d825bf869743       nginx
	5c1f143322ea5       ghcr.io/headlamp-k8s/headlamp@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5                        48 seconds ago       Running             headlamp                  0                   808949d779918       headlamp-5485c556b-v68g9
	5b4739a5a4674       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 About a minute ago   Running             gcp-auth                  0                   3bae46834bde9       gcp-auth-5f6b4f85fd-vfhrm
	d815d0b9caebc       1a024e390dd05                                                                                                                3 minutes ago        Exited              patch                     1                   69ea3c9aec884       ingress-nginx-admission-patch-ww9gk
	1446a72351f33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   3 minutes ago        Exited              create                    0                   1935d09651253       ingress-nginx-admission-create-rzbv2
	8fcbeb5b7275a       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        3 minutes ago        Running             yakd                      0                   b1764ef93bd63       yakd-dashboard-9947fc6bf-f6gpj
	6715bc4652b84       ba04bb24b9575                                                                                                                4 minutes ago        Running             storage-provisioner       0                   35af17894dade       storage-provisioner
	9a0dc72a6e29c       97e04611ad434                                                                                                                4 minutes ago        Running             coredns                   0                   31aa5a064bb41       coredns-5dd5756b68-j8npl
	014d925edadde       3ca3ca488cf13                                                                                                                4 minutes ago        Running             kube-proxy                0                   70196dfffd158       kube-proxy-9p6xr
	f7b008820594c       04b4c447bb9d4                                                                                                                4 minutes ago        Running             kube-apiserver            0                   21b21c6f9db0c       kube-apiserver-addons-263000
	6025cd5ea8e30       05c284c929889                                                                                                                4 minutes ago        Running             kube-scheduler            0                   ad6bc0e8ffbc0       kube-scheduler-addons-263000
	269e89325c468       9cdd6470f48c8                                                                                                                4 minutes ago        Running             etcd                      0                   a53903a91e7bb       etcd-addons-263000
	932e242a08ce3       9961cbceaf234                                                                                                                4 minutes ago        Running             kube-controller-manager   0                   91453d4b9b4f8       kube-controller-manager-addons-263000
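	(In the table above, the Exited create and patch containers are the one-shot ingress-nginx admission Jobs, which are expected to terminate after completing; hello-world-app is mid-restart, having exited 10 seconds before this snapshot on its second run, hence ATTEMPT 1.)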
	
	
	==> coredns [9a0dc72a6e29] <==
	[INFO] 10.244.0.20:60912 - 54993 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030758s
	[INFO] 10.244.0.20:60912 - 2759 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042302s
	[INFO] 10.244.0.20:60912 - 22341 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036426s
	[INFO] 10.244.0.20:60912 - 24183 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000084896s
	[INFO] 10.244.0.20:48193 - 48289 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073976s
	[INFO] 10.244.0.20:48193 - 7677 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048179s
	[INFO] 10.244.0.20:48193 - 62701 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020005s
	[INFO] 10.244.0.20:48193 - 15502 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013045s
	[INFO] 10.244.0.20:48193 - 11222 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024589s
	[INFO] 10.244.0.20:48193 - 40516 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016963s
	[INFO] 10.244.0.20:48193 - 8896 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005318s
	[INFO] 10.244.0.20:38687 - 31591 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.002772354s
	[INFO] 10.244.0.20:38687 - 29022 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034092s
	[INFO] 10.244.0.20:38687 - 6027 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038802s
	[INFO] 10.244.0.20:44948 - 58337 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015754s
	[INFO] 10.244.0.20:38687 - 48726 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000333s
	[INFO] 10.244.0.20:38687 - 52873 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012211s
	[INFO] 10.244.0.20:44948 - 13061 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020463s
	[INFO] 10.244.0.20:44948 - 5524 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001067s
	[INFO] 10.244.0.20:38687 - 41367 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000328s
	[INFO] 10.244.0.20:44948 - 41597 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024632s
	[INFO] 10.244.0.20:38687 - 55351 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012628s
	[INFO] 10.244.0.20:44948 - 43468 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010753s
	[INFO] 10.244.0.20:44948 - 36478 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010086s
	[INFO] 10.244.0.20:44948 - 48154 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009836s
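	(The NXDOMAIN bursts above are resolver search-path expansion rather than failures: with the cluster's default ndots setting, hello-world-app.default.svc.cluster.local is first tried with each search domain appended, and only the final fully-qualified query returns NOERROR. Writing the name with a trailing dot skips the search list entirely; a minimal Go illustration, assuming it runs inside the cluster where this service name resolves:)

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The trailing dot marks the name as fully qualified, so the resolver
        // does not append search domains and the NXDOMAIN round trips seen in
        // the coredns log never happen.
        addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(addrs)
    }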
	
	
	==> describe nodes <==
	Name:               addons-263000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-263000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=addons-263000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T13_49_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-263000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 21:49:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-263000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 21:53:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 21:53:28 +0000   Thu, 07 Mar 2024 21:49:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 21:53:28 +0000   Thu, 07 Mar 2024 21:49:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 21:53:28 +0000   Thu, 07 Mar 2024 21:49:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 21:53:28 +0000   Thu, 07 Mar 2024 21:49:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-263000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 f16d43b128924b46895223ee9f0beb8e
	  System UUID:                f16d43b128924b46895223ee9f0beb8e
	  Boot ID:                    2fb46d85-0753-40f6-8d1f-f2faa99712ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-4zz98         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-vfhrm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  headlamp                    headlamp-5485c556b-v68g9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-5dd5756b68-j8npl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m20s
	  kube-system                 etcd-addons-263000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m33s
	  kube-system                 kube-apiserver-addons-263000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-addons-263000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-9p6xr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-263000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-f6gpj           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
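	(These totals check out against the capacity block above: 750m of the node's 2 CPUs is 37.5%, truncated to 37%, and 298Mi of the 3904752Ki allocatable memory is about 7.8%, shown as 7%.)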
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m38s)  kubelet          Node addons-263000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m38s)  kubelet          Node addons-263000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m38s)  kubelet          Node addons-263000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m33s                  kubelet          Node addons-263000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s                  kubelet          Node addons-263000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s                  kubelet          Node addons-263000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m29s                  kubelet          Node addons-263000 status is now: NodeReady
	  Normal  RegisteredNode           4m21s                  node-controller  Node addons-263000 event: Registered Node addons-263000 in Controller
	
	
	==> dmesg <==
	[  +5.599514] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.241977] kauditd_printk_skb: 7 callbacks suppressed
	[Mar 7 21:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.075072] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.154597] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.261797] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.529116] kauditd_printk_skb: 25 callbacks suppressed
	[ +10.213718] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.192905] kauditd_printk_skb: 19 callbacks suppressed
	[Mar 7 21:51] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.372526] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.042859] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.683679] kauditd_printk_skb: 5 callbacks suppressed
	[Mar 7 21:52] kauditd_printk_skb: 7 callbacks suppressed
	[ +17.009562] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.513593] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.079361] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.488254] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.068701] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.714769] kauditd_printk_skb: 6 callbacks suppressed
	[Mar 7 21:53] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.177949] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.879872] kauditd_printk_skb: 22 callbacks suppressed
	[ +19.769926] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.003269] kauditd_printk_skb: 9 callbacks suppressed
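	(The recurring kauditd_printk_skb lines only mean the kernel rate-limited audit messages, reporting "N callbacks suppressed"; under the container churn of this run that is expected noise, not an error.)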
	
	
	==> etcd [269e89325c46] <==
	{"level":"info","ts":"2024-03-07T21:49:19.711664Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T21:49:19.711819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 switched to configuration voters=(14154013790752671120)"}
	{"level":"info","ts":"2024-03-07T21:49:19.712008Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2024-03-07T21:49:20.500491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T21:49:20.500548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T21:49:20.500567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-03-07T21:49:20.500578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T21:49:20.500587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-07T21:49:20.500594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-03-07T21:49:20.500601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-07T21:49:20.501509Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-263000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T21:49:20.501527Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:49:20.501627Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:49:20.501923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:49:20.501956Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:49:20.501966Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:49:20.501993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T21:49:20.502Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T21:49:20.502004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:49:20.502451Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-03-07T21:49:20.502571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T21:50:27.547507Z","caller":"traceutil/trace.go:171","msg":"trace[1724346099] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"100.246349ms","start":"2024-03-07T21:50:27.447252Z","end":"2024-03-07T21:50:27.547498Z","steps":["trace[1724346099] 'process raft request'  (duration: 100.019912ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T21:51:07.487786Z","caller":"traceutil/trace.go:171","msg":"trace[579090319] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"128.734475ms","start":"2024-03-07T21:51:07.359044Z","end":"2024-03-07T21:51:07.487778Z","steps":["trace[579090319] 'read index received'  (duration: 128.68821ms)","trace[579090319] 'applied index is now lower than readState.Index'  (duration: 45.932µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T21:51:07.487874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.8343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10918"}
	{"level":"info","ts":"2024-03-07T21:51:07.487887Z","caller":"traceutil/trace.go:171","msg":"trace[979306414] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1131; }","duration":"128.856598ms","start":"2024-03-07T21:51:07.359028Z","end":"2024-03-07T21:51:07.487884Z","steps":["trace[979306414] 'agreement among raft nodes before linearized reading'  (duration: 128.795537ms)"],"step_count":1}
	
	
	==> gcp-auth [5b4739a5a467] <==
	2024/03/07 21:52:13 GCP Auth Webhook started!
	2024/03/07 21:52:19 Ready to marshal response ...
	2024/03/07 21:52:19 Ready to write response ...
	2024/03/07 21:52:24 Ready to marshal response ...
	2024/03/07 21:52:24 Ready to write response ...
	2024/03/07 21:52:42 Ready to marshal response ...
	2024/03/07 21:52:42 Ready to write response ...
	2024/03/07 21:52:42 Ready to marshal response ...
	2024/03/07 21:52:42 Ready to write response ...
	2024/03/07 21:52:42 Ready to marshal response ...
	2024/03/07 21:52:42 Ready to write response ...
	2024/03/07 21:52:51 Ready to marshal response ...
	2024/03/07 21:52:51 Ready to write response ...
	2024/03/07 21:53:04 Ready to marshal response ...
	2024/03/07 21:53:04 Ready to write response ...
	2024/03/07 21:53:04 Ready to marshal response ...
	2024/03/07 21:53:04 Ready to write response ...
	2024/03/07 21:53:04 Ready to marshal response ...
	2024/03/07 21:53:04 Ready to write response ...
	2024/03/07 21:53:23 Ready to marshal response ...
	2024/03/07 21:53:23 Ready to write response ...
	2024/03/07 21:53:33 Ready to marshal response ...
	2024/03/07 21:53:33 Ready to write response ...
	
	
	==> kernel <==
	 21:53:57 up 4 min,  0 users,  load average: 0.59, 0.66, 0.34
	Linux addons-263000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7b008820594] <==
	I0307 21:52:58.379071       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.379089       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.387129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.387150       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.395107       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.395121       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.399130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.399150       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.402525       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.402544       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.409889       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.409964       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 21:52:58.412559       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 21:52:58.412579       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0307 21:52:59.395391       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0307 21:52:59.412799       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0307 21:52:59.419514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0307 21:53:04.167128       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.2.99"}
	E0307 21:53:07.602169       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0307 21:53:23.236256       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 21:53:23.328517       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.203.116"}
	I0307 21:53:25.646314       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0307 21:53:25.652437       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 21:53:26.658818       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 21:53:33.537927       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.208.183"}
	
	
	==> kube-controller-manager [932e242a08ce] <==
	W0307 21:53:33.227475       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:53:33.227491       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:53:33.467797       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0307 21:53:33.473542       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-4zz98"
	I0307 21:53:33.479689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.162473ms"
	I0307 21:53:33.486012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.298811ms"
	I0307 21:53:33.493589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.547746ms"
	I0307 21:53:33.493694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.271µs"
	I0307 21:53:35.679535       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0307 21:53:36.151175       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0307 21:53:36.151239       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 21:53:36.568576       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0307 21:53:36.568602       1 shared_informer.go:318] Caches are synced for garbage collector
	W0307 21:53:37.451976       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:53:37.452001       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 21:53:40.552059       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:53:40.552084       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 21:53:43.143135       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:53:43.143158       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:53:46.291626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.84µs"
	I0307 21:53:47.308128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="23.755µs"
	I0307 21:53:48.309125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.546µs"
	I0307 21:53:49.532352       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 21:53:49.533588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="1.792µs"
	I0307 21:53:49.535902       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [014d925edadd] <==
	I0307 21:49:37.857215       1 server_others.go:69] "Using iptables proxy"
	I0307 21:49:37.872819       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0307 21:49:37.935169       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 21:49:37.935189       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 21:49:37.935974       1 server_others.go:152] "Using iptables Proxier"
	I0307 21:49:37.936000       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 21:49:37.936077       1 server.go:846] "Version info" version="v1.28.4"
	I0307 21:49:37.936083       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:49:37.936730       1 config.go:188] "Starting service config controller"
	I0307 21:49:37.936734       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 21:49:37.936742       1 config.go:97] "Starting endpoint slice config controller"
	I0307 21:49:37.936743       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 21:49:37.936909       1 config.go:315] "Starting node config controller"
	I0307 21:49:37.936911       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 21:49:38.037089       1 shared_informer.go:318] Caches are synced for node config
	I0307 21:49:38.037108       1 shared_informer.go:318] Caches are synced for service config
	I0307 21:49:38.037120       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6025cd5ea8e3] <==
	W0307 21:49:21.107917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 21:49:21.107920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 21:49:21.107931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 21:49:21.107934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 21:49:21.107969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 21:49:21.107976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 21:49:21.107998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 21:49:21.108004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 21:49:21.108369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 21:49:21.108405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 21:49:21.108534       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 21:49:21.108545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 21:49:21.109109       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 21:49:21.109144       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 21:49:22.014433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 21:49:22.014449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 21:49:22.058205       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 21:49:22.058273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 21:49:22.077237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 21:49:22.077256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 21:49:22.147745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 21:49:22.147889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 21:49:22.151423       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 21:49:22.151435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0307 21:49:22.502873       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 21:53:39 addons-263000 kubelet[2390]: E0307 21:53:39.565300    2390 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(2b265f89-7503-4f45-b8e5-edaee7812238)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="2b265f89-7503-4f45-b8e5-edaee7812238"
	Mar 07 21:53:46 addons-263000 kubelet[2390]: I0307 21:53:46.284678    2390 scope.go:117] "RemoveContainer" containerID="bad4caf1a1284e8d7ae753d52f99f3fdbce168352d03b484a7d82152191b81a3"
	Mar 07 21:53:47 addons-263000 kubelet[2390]: I0307 21:53:47.292272    2390 scope.go:117] "RemoveContainer" containerID="bad4caf1a1284e8d7ae753d52f99f3fdbce168352d03b484a7d82152191b81a3"
	Mar 07 21:53:47 addons-263000 kubelet[2390]: I0307 21:53:47.292447    2390 scope.go:117] "RemoveContainer" containerID="2cf20bdb1244ab4fec99a3d7045d6d2f55635c9ff7ff058f9c4e1355ec520b92"
	Mar 07 21:53:47 addons-263000 kubelet[2390]: E0307 21:53:47.292564    2390 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-4zz98_default(43c128c1-77c9-4fde-af18-22a07a3256e5)\"" pod="default/hello-world-app-5d77478584-4zz98" podUID="43c128c1-77c9-4fde-af18-22a07a3256e5"
	Mar 07 21:53:48 addons-263000 kubelet[2390]: I0307 21:53:48.300069    2390 scope.go:117] "RemoveContainer" containerID="2cf20bdb1244ab4fec99a3d7045d6d2f55635c9ff7ff058f9c4e1355ec520b92"
	Mar 07 21:53:48 addons-263000 kubelet[2390]: E0307 21:53:48.300190    2390 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-4zz98_default(43c128c1-77c9-4fde-af18-22a07a3256e5)\"" pod="default/hello-world-app-5d77478584-4zz98" podUID="43c128c1-77c9-4fde-af18-22a07a3256e5"
	Mar 07 21:53:48 addons-263000 kubelet[2390]: I0307 21:53:48.800742    2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqhlp\" (UniqueName: \"kubernetes.io/projected/2b265f89-7503-4f45-b8e5-edaee7812238-kube-api-access-bqhlp\") pod \"2b265f89-7503-4f45-b8e5-edaee7812238\" (UID: \"2b265f89-7503-4f45-b8e5-edaee7812238\") "
	Mar 07 21:53:48 addons-263000 kubelet[2390]: I0307 21:53:48.803595    2390 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b265f89-7503-4f45-b8e5-edaee7812238-kube-api-access-bqhlp" (OuterVolumeSpecName: "kube-api-access-bqhlp") pod "2b265f89-7503-4f45-b8e5-edaee7812238" (UID: "2b265f89-7503-4f45-b8e5-edaee7812238"). InnerVolumeSpecName "kube-api-access-bqhlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:53:48 addons-263000 kubelet[2390]: I0307 21:53:48.901812    2390 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqhlp\" (UniqueName: \"kubernetes.io/projected/2b265f89-7503-4f45-b8e5-edaee7812238-kube-api-access-bqhlp\") on node \"addons-263000\" DevicePath \"\""
	Mar 07 21:53:49 addons-263000 kubelet[2390]: I0307 21:53:49.306071    2390 scope.go:117] "RemoveContainer" containerID="438e1a87ca8b0b51200ffed80eb9e8d4b5fd0b19555db837dca2c41c8b440937"
	Mar 07 21:53:49 addons-263000 kubelet[2390]: I0307 21:53:49.569437    2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2b265f89-7503-4f45-b8e5-edaee7812238" path="/var/lib/kubelet/pods/2b265f89-7503-4f45-b8e5-edaee7812238/volumes"
	Mar 07 21:53:49 addons-263000 kubelet[2390]: I0307 21:53:49.569706    2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="75eabb00-9888-4a70-8131-4ee72f1c329e" path="/var/lib/kubelet/pods/75eabb00-9888-4a70-8131-4ee72f1c329e/volumes"
	Mar 07 21:53:49 addons-263000 kubelet[2390]: I0307 21:53:49.569862    2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7927b1b8-42ee-432d-8827-4e6765d8136c" path="/var/lib/kubelet/pods/7927b1b8-42ee-432d-8827-4e6765d8136c/volumes"
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.824292    2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb3b7868-d91d-4de7-85fc-1269827c053a-webhook-cert\") pod \"eb3b7868-d91d-4de7-85fc-1269827c053a\" (UID: \"eb3b7868-d91d-4de7-85fc-1269827c053a\") "
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.824326    2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghl8\" (UniqueName: \"kubernetes.io/projected/eb3b7868-d91d-4de7-85fc-1269827c053a-kube-api-access-mghl8\") pod \"eb3b7868-d91d-4de7-85fc-1269827c053a\" (UID: \"eb3b7868-d91d-4de7-85fc-1269827c053a\") "
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.825094    2390 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3b7868-d91d-4de7-85fc-1269827c053a-kube-api-access-mghl8" (OuterVolumeSpecName: "kube-api-access-mghl8") pod "eb3b7868-d91d-4de7-85fc-1269827c053a" (UID: "eb3b7868-d91d-4de7-85fc-1269827c053a"). InnerVolumeSpecName "kube-api-access-mghl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.825143    2390 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3b7868-d91d-4de7-85fc-1269827c053a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "eb3b7868-d91d-4de7-85fc-1269827c053a" (UID: "eb3b7868-d91d-4de7-85fc-1269827c053a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.924373    2390 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb3b7868-d91d-4de7-85fc-1269827c053a-webhook-cert\") on node \"addons-263000\" DevicePath \"\""
	Mar 07 21:53:52 addons-263000 kubelet[2390]: I0307 21:53:52.924387    2390 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mghl8\" (UniqueName: \"kubernetes.io/projected/eb3b7868-d91d-4de7-85fc-1269827c053a-kube-api-access-mghl8\") on node \"addons-263000\" DevicePath \"\""
	Mar 07 21:53:53 addons-263000 kubelet[2390]: I0307 21:53:53.328952    2390 scope.go:117] "RemoveContainer" containerID="e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881"
	Mar 07 21:53:53 addons-263000 kubelet[2390]: I0307 21:53:53.336132    2390 scope.go:117] "RemoveContainer" containerID="e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881"
	Mar 07 21:53:53 addons-263000 kubelet[2390]: E0307 21:53:53.336420    2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881" containerID="e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881"
	Mar 07 21:53:53 addons-263000 kubelet[2390]: I0307 21:53:53.336441    2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881"} err="failed to get container status \"e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881\": rpc error: code = Unknown desc = Error response from daemon: No such container: e498100ade0bcb9163922886c2a68e33e8da7627588fdeb9fb0667dd2055d881"
	Mar 07 21:53:53 addons-263000 kubelet[2390]: I0307 21:53:53.567435    2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eb3b7868-d91d-4de7-85fc-1269827c053a" path="/var/lib/kubelet/pods/eb3b7868-d91d-4de7-85fc-1269827c053a/volumes"
	
	
	==> storage-provisioner [6715bc4652b8] <==
	I0307 21:49:39.319777       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 21:49:39.326528       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 21:49:39.326549       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 21:49:39.333845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 21:49:39.333900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-263000_c9051ca7-fdeb-4ad3-a0c2-7c1fc288699d!
	I0307 21:49:39.334275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2593583d-60f1-4a0c-8607-de69c9c47787", APIVersion:"v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-263000_c9051ca7-fdeb-4ad3-a0c2-7c1fc288699d became leader
	I0307 21:49:39.435210       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-263000_c9051ca7-fdeb-4ad3-a0c2-7c1fc288699d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-263000 -n addons-263000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-263000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (34.45s)

TestCertOptions (10.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-995000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-995000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.9500105s)

-- stdout --
	* [cert-options-995000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-995000" primary control-plane node in "cert-options-995000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-995000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-995000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-995000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-995000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-995000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.564417ms)

-- stdout --
	* The control-plane node cert-options-995000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-995000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-995000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-995000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-995000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-995000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.290667ms)

-- stdout --
	* The control-plane node cert-options-995000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-995000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-995000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-995000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-995000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-07 14:37:56.348016 -0800 PST m=+3088.843454585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-995000 -n cert-options-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-995000 -n cert-options-995000: exit status 7 (32.449834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-995000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-995000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-995000
--- FAIL: TestCertOptions (10.24s)
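
Note: every qemu2 start above failed on the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error, so the SAN assertions never ran against a live apiserver. A minimal Go sketch of the same unix-socket dial (illustrative only, not part of the test suite; the socket path is the one reported in the logs) can confirm whether a socket_vmnet daemon is listening on the host:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failures above

		// A missing socket file would fail with ENOENT; "connection refused"
		// means the file exists but no daemon is accepting connections on it.
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing:", err)
			return
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("dial failed (matches the error above):", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run on the build host, a "connection refused" result here reproduces the provisioning failure independently of minikube.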

TestCertExpiration (195.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.03582775s)

-- stdout --
	* [cert-expiration-072000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-072000" primary control-plane node in "cert-expiration-072000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-072000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-072000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.202055166s)

-- stdout --
	* [cert-expiration-072000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-072000" primary control-plane node in "cert-expiration-072000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-072000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-072000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-072000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-072000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-072000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-072000" primary control-plane node in "cert-expiration-072000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-072000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-072000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-072000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-07 14:40:56.309868 -0800 PST m=+3268.804961085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-072000 -n cert-expiration-072000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-072000 -n cert-expiration-072000: exit status 7 (41.346042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-072000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-072000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-072000
--- FAIL: TestCertExpiration (195.39s)
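
Note: the assertion this test never reached is, at bottom, a comparison of the apiserver certificate's NotAfter against the current time (with --cert-expiration=3m the certificate should already be expired by the second start; with 8760h it should be about a year out). A hedged Go sketch of that comparison, assuming a local PEM copy of the certificate; the file name apiserver.crt is hypothetical, since the real file lives at /var/lib/minikube/certs inside the VM that never started:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// "apiserver.crt" is a hypothetical local copy of the certificate;
		// the in-VM path used by the test is /var/lib/minikube/certs/apiserver.crt.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("NotAfter:", cert.NotAfter)
		fmt.Println("expired now:", time.Now().After(cert.NotAfter))
	}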

TestDockerFlags (10.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-655000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-655000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.018819875s)

-- stdout --
	* [docker-flags-655000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-655000" primary control-plane node in "docker-flags-655000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-655000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:37:35.987766    4001 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:37:35.987877    4001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:35.987881    4001 out.go:304] Setting ErrFile to fd 2...
	I0307 14:37:35.987883    4001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:35.988007    4001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:37:35.989062    4001 out.go:298] Setting JSON to false
	I0307 14:37:36.005325    4001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4012,"bootTime":1709847043,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:37:36.005402    4001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:37:36.011430    4001 out.go:177] * [docker-flags-655000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:37:36.019439    4001 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:37:36.019568    4001 notify.go:220] Checking for updates...
	I0307 14:37:36.024429    4001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:37:36.027330    4001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:37:36.030386    4001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:37:36.033473    4001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:37:36.034896    4001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:37:36.038770    4001 config.go:182] Loaded profile config "force-systemd-flag-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:36.038835    4001 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:36.038885    4001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:37:36.043391    4001 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:37:36.049395    4001 start.go:297] selected driver: qemu2
	I0307 14:37:36.049401    4001 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:37:36.049409    4001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:37:36.051657    4001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:37:36.055414    4001 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:37:36.058485    4001 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0307 14:37:36.058520    4001 cni.go:84] Creating CNI manager for ""
	I0307 14:37:36.058528    4001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:37:36.058533    4001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:37:36.058566    4001 start.go:340] cluster config:
	{Name:docker-flags-655000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-655000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:37:36.063062    4001 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:37:36.071338    4001 out.go:177] * Starting "docker-flags-655000" primary control-plane node in "docker-flags-655000" cluster
	I0307 14:37:36.075380    4001 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:37:36.075404    4001 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:37:36.075417    4001 cache.go:56] Caching tarball of preloaded images
	I0307 14:37:36.075491    4001 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:37:36.075497    4001 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:37:36.075576    4001 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/docker-flags-655000/config.json ...
	I0307 14:37:36.075589    4001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/docker-flags-655000/config.json: {Name:mkf5a519d629fa4714eb34332842d1349581ced1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:37:36.075818    4001 start.go:360] acquireMachinesLock for docker-flags-655000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:36.075851    4001 start.go:364] duration metric: took 27µs to acquireMachinesLock for "docker-flags-655000"
	I0307 14:37:36.075862    4001 start.go:93] Provisioning new machine with config: &{Name:docker-flags-655000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-655000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:36.075896    4001 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:36.084409    4001 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:36.101994    4001 start.go:159] libmachine.API.Create for "docker-flags-655000" (driver="qemu2")
	I0307 14:37:36.102025    4001 client.go:168] LocalClient.Create starting
	I0307 14:37:36.102094    4001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:36.102125    4001 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:36.102135    4001 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:36.102185    4001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:36.102207    4001 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:36.102213    4001 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:36.102574    4001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:36.244284    4001 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:36.433240    4001 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:36.433250    4001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:36.433426    4001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:36.445849    4001 main.go:141] libmachine: STDOUT: 
	I0307 14:37:36.445865    4001 main.go:141] libmachine: STDERR: 
	I0307 14:37:36.445918    4001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2 +20000M
	I0307 14:37:36.456496    4001 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:36.456514    4001 main.go:141] libmachine: STDERR: 
	I0307 14:37:36.456530    4001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:36.456537    4001 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:36.456580    4001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:21:0a:15:61:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:36.458308    4001 main.go:141] libmachine: STDOUT: 
	I0307 14:37:36.458324    4001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:36.458354    4001 client.go:171] duration metric: took 356.311584ms to LocalClient.Create
	I0307 14:37:38.460570    4001 start.go:128] duration metric: took 2.384648125s to createHost
	I0307 14:37:38.460694    4001 start.go:83] releasing machines lock for "docker-flags-655000", held for 2.384825958s
	W0307 14:37:38.460753    4001 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:38.475817    4001 out.go:177] * Deleting "docker-flags-655000" in qemu2 ...
	W0307 14:37:38.511118    4001 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:38.511150    4001 start.go:728] Will try again in 5 seconds ...
	I0307 14:37:43.513347    4001 start.go:360] acquireMachinesLock for docker-flags-655000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:43.513692    4001 start.go:364] duration metric: took 256.917µs to acquireMachinesLock for "docker-flags-655000"
	I0307 14:37:43.513756    4001 start.go:93] Provisioning new machine with config: &{Name:docker-flags-655000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-655000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:43.514006    4001 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:43.537257    4001 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:43.585845    4001 start.go:159] libmachine.API.Create for "docker-flags-655000" (driver="qemu2")
	I0307 14:37:43.585927    4001 client.go:168] LocalClient.Create starting
	I0307 14:37:43.586153    4001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:43.586243    4001 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:43.586262    4001 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:43.586373    4001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:43.586426    4001 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:43.586445    4001 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:43.587451    4001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:43.764698    4001 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:43.905949    4001 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:43.905955    4001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:43.906149    4001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:43.918673    4001 main.go:141] libmachine: STDOUT: 
	I0307 14:37:43.918694    4001 main.go:141] libmachine: STDERR: 
	I0307 14:37:43.918738    4001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2 +20000M
	I0307 14:37:43.929476    4001 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:43.929497    4001 main.go:141] libmachine: STDERR: 
	I0307 14:37:43.929507    4001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:43.929522    4001 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:43.929560    4001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c4:52:b4:0a:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/docker-flags-655000/disk.qcow2
	I0307 14:37:43.931352    4001 main.go:141] libmachine: STDOUT: 
	I0307 14:37:43.931372    4001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:43.931389    4001 client.go:171] duration metric: took 345.455542ms to LocalClient.Create
	I0307 14:37:45.933564    4001 start.go:128] duration metric: took 2.419525417s to createHost
	I0307 14:37:45.933663    4001 start.go:83] releasing machines lock for "docker-flags-655000", held for 2.419943625s
	W0307 14:37:45.934125    4001 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-655000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-655000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:45.943799    4001 out.go:177] 
	W0307 14:37:45.948815    4001 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:37:45.948840    4001 out.go:239] * 
	* 
	W0307 14:37:45.951475    4001 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:37:45.960817    4001 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-655000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-655000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-655000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.136625ms)

-- stdout --
	* The control-plane node docker-flags-655000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-655000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-655000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-655000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-655000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-655000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-655000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-655000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-655000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.707292ms)

-- stdout --
	* The control-plane node docker-flags-655000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-655000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-655000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-655000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-655000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-655000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-07 14:37:46.104922 -0800 PST m=+3078.600380418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-655000 -n docker-flags-655000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-655000 -n docker-flags-655000: exit status 7 (31.234958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-655000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-655000
--- FAIL: TestDockerFlags (10.28s)
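The root cause is visible in the stderr log above: both provisioning attempts die at the same step, with socket_vmnet_client reporting `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the qemu2 VM is never created and every later ssh/status assertion finds the host in state=Stopped. "Connection refused" on a unix socket means nothing is accepting connections at that path, i.e. the socket_vmnet daemon was not running on this CI host. A minimal standalone probe that reproduces just this step (a sketch for illustration, not part of the test suite; the socket path is the SocketVMnetPath from the config dump above):

// probe_socket_vmnet.go: dial the socket_vmnet control socket the same way
// socket_vmnet_client does before launching qemu-system-aarch64. An error here
// matching the "Connection refused" in the log means no daemon owns the socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath in the cluster config above

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", path)
}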

TestForceSystemdFlag (10.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-843000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-843000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.819821625s)

-- stdout --
	* [force-systemd-flag-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-843000" primary control-plane node in "force-systemd-flag-843000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-843000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:37:31.055231    3977 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:37:31.055339    3977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:31.055342    3977 out.go:304] Setting ErrFile to fd 2...
	I0307 14:37:31.055344    3977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:31.055465    3977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:37:31.056546    3977 out.go:298] Setting JSON to false
	I0307 14:37:31.072571    3977 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4008,"bootTime":1709847043,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:37:31.072631    3977 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:37:31.079567    3977 out.go:177] * [force-systemd-flag-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:37:31.086525    3977 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:37:31.090504    3977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:37:31.086621    3977 notify.go:220] Checking for updates...
	I0307 14:37:31.093532    3977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:37:31.096481    3977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:37:31.099487    3977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:37:31.102458    3977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:37:31.105732    3977 config.go:182] Loaded profile config "force-systemd-env-041000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:31.105800    3977 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:31.105856    3977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:37:31.110442    3977 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:37:31.116425    3977 start.go:297] selected driver: qemu2
	I0307 14:37:31.116430    3977 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:37:31.116435    3977 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:37:31.118604    3977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:37:31.121452    3977 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:37:31.124560    3977 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 14:37:31.124600    3977 cni.go:84] Creating CNI manager for ""
	I0307 14:37:31.124608    3977 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:37:31.124615    3977 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:37:31.124645    3977 start.go:340] cluster config:
	{Name:force-systemd-flag-843000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:37:31.128975    3977 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:37:31.136443    3977 out.go:177] * Starting "force-systemd-flag-843000" primary control-plane node in "force-systemd-flag-843000" cluster
	I0307 14:37:31.140531    3977 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:37:31.140549    3977 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:37:31.140558    3977 cache.go:56] Caching tarball of preloaded images
	I0307 14:37:31.140630    3977 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:37:31.140637    3977 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:37:31.140712    3977 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/force-systemd-flag-843000/config.json ...
	I0307 14:37:31.140728    3977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/force-systemd-flag-843000/config.json: {Name:mk5d7339638b7e1a17a712057eedb32e6d0f1ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:37:31.140982    3977 start.go:360] acquireMachinesLock for force-systemd-flag-843000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:31.141021    3977 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "force-systemd-flag-843000"
	I0307 14:37:31.141034    3977 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:31.141092    3977 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:31.149457    3977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:31.167251    3977 start.go:159] libmachine.API.Create for "force-systemd-flag-843000" (driver="qemu2")
	I0307 14:37:31.167284    3977 client.go:168] LocalClient.Create starting
	I0307 14:37:31.167364    3977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:31.167400    3977 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:31.167409    3977 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:31.167455    3977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:31.167477    3977 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:31.167487    3977 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:31.167857    3977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:31.307970    3977 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:31.390384    3977 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:31.390389    3977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:31.390570    3977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:31.403043    3977 main.go:141] libmachine: STDOUT: 
	I0307 14:37:31.403067    3977 main.go:141] libmachine: STDERR: 
	I0307 14:37:31.403121    3977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2 +20000M
	I0307 14:37:31.414522    3977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:31.414538    3977 main.go:141] libmachine: STDERR: 
	I0307 14:37:31.414547    3977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:31.414554    3977 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:31.414580    3977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a5:10:f9:2d:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:31.416371    3977 main.go:141] libmachine: STDOUT: 
	I0307 14:37:31.416399    3977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:31.416417    3977 client.go:171] duration metric: took 249.125792ms to LocalClient.Create
	I0307 14:37:33.418646    3977 start.go:128] duration metric: took 2.27749175s to createHost
	I0307 14:37:33.418712    3977 start.go:83] releasing machines lock for "force-systemd-flag-843000", held for 2.277666833s
	W0307 14:37:33.418760    3977 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:33.444928    3977 out.go:177] * Deleting "force-systemd-flag-843000" in qemu2 ...
	W0307 14:37:33.464233    3977 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:33.464258    3977 start.go:728] Will try again in 5 seconds ...
	I0307 14:37:38.466482    3977 start.go:360] acquireMachinesLock for force-systemd-flag-843000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:38.466960    3977 start.go:364] duration metric: took 366.542µs to acquireMachinesLock for "force-systemd-flag-843000"
	I0307 14:37:38.467056    3977 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:38.467319    3977 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:38.486711    3977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:38.534823    3977 start.go:159] libmachine.API.Create for "force-systemd-flag-843000" (driver="qemu2")
	I0307 14:37:38.534866    3977 client.go:168] LocalClient.Create starting
	I0307 14:37:38.534987    3977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:38.535039    3977 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:38.535053    3977 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:38.535113    3977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:38.535153    3977 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:38.535166    3977 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:38.535694    3977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:38.690126    3977 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:38.766001    3977 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:38.766007    3977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:38.766200    3977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:38.778334    3977 main.go:141] libmachine: STDOUT: 
	I0307 14:37:38.778353    3977 main.go:141] libmachine: STDERR: 
	I0307 14:37:38.778402    3977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2 +20000M
	I0307 14:37:38.788966    3977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:38.788991    3977 main.go:141] libmachine: STDERR: 
	I0307 14:37:38.789003    3977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:38.789007    3977 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:38.789051    3977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:2a:54:79:28:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-flag-843000/disk.qcow2
	I0307 14:37:38.790811    3977 main.go:141] libmachine: STDOUT: 
	I0307 14:37:38.790828    3977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:38.790839    3977 client.go:171] duration metric: took 255.967125ms to LocalClient.Create
	I0307 14:37:40.793082    3977 start.go:128] duration metric: took 2.325731291s to createHost
	I0307 14:37:40.793139    3977 start.go:83] releasing machines lock for "force-systemd-flag-843000", held for 2.326146583s
	W0307 14:37:40.793438    3977 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:40.810198    3977 out.go:177] 
	W0307 14:37:40.817086    3977 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:37:40.817129    3977 out.go:239] * 
	* 
	W0307 14:37:40.819813    3977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:37:40.831010    3977 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-843000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-843000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-843000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.411167ms)

-- stdout --
	* The control-plane node force-systemd-flag-843000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-843000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-843000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-07 14:37:40.930339 -0800 PST m=+3073.425807001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-843000 -n force-systemd-flag-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-843000 -n force-systemd-flag-843000: exit status 7 (36.621666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-843000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-843000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-843000
--- FAIL: TestForceSystemdFlag (10.04s)
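TestForceSystemdFlag never reaches its real assertion, for the same socket_vmnet reason as above. Had the VM booted, the property checked (docker_test.go:110 above) is the cgroup driver Docker reports inside the guest, which --force-systemd is expected to switch to systemd. A standalone sketch of that check, using the exact command from the log (an illustration only, not part of the suite; it assumes a reachable docker daemon wherever the probe runs):

// check_cgroup_driver.go: run the same probe the test issues via `minikube ssh`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker info failed:", err)
		os.Exit(1)
	}
	driver := strings.TrimSpace(string(out))
	if driver == "systemd" {
		fmt.Println("cgroup driver is systemd, as --force-systemd expects")
	} else {
		fmt.Printf("unexpected cgroup driver: %q\n", driver)
	}
}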

TestForceSystemdEnv (10.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-041000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-041000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.794784833s)

-- stdout --
	* [force-systemd-env-041000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-041000" primary control-plane node in "force-systemd-env-041000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-041000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:37:25.973720    3941 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:37:25.973835    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:25.973838    3941 out.go:304] Setting ErrFile to fd 2...
	I0307 14:37:25.973840    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:37:25.973985    3941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:37:25.975291    3941 out.go:298] Setting JSON to false
	I0307 14:37:25.992655    3941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4002,"bootTime":1709847043,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:37:25.992724    3941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:37:25.997202    3941 out.go:177] * [force-systemd-env-041000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:37:26.010162    3941 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:37:26.005193    3941 notify.go:220] Checking for updates...
	I0307 14:37:26.018110    3941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:37:26.026098    3941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:37:26.033974    3941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:37:26.042097    3941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:37:26.049049    3941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0307 14:37:26.053464    3941 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:37:26.053517    3941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:37:26.058105    3941 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:37:26.065118    3941 start.go:297] selected driver: qemu2
	I0307 14:37:26.065123    3941 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:37:26.065129    3941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:37:26.067550    3941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:37:26.071105    3941 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:37:26.075195    3941 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 14:37:26.075232    3941 cni.go:84] Creating CNI manager for ""
	I0307 14:37:26.075244    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:37:26.075250    3941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:37:26.075274    3941 start.go:340] cluster config:
	{Name:force-systemd-env-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:37:26.079755    3941 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:37:26.087124    3941 out.go:177] * Starting "force-systemd-env-041000" primary control-plane node in "force-systemd-env-041000" cluster
	I0307 14:37:26.091087    3941 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:37:26.091103    3941 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:37:26.091113    3941 cache.go:56] Caching tarball of preloaded images
	I0307 14:37:26.091171    3941 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:37:26.091177    3941 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:37:26.091252    3941 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/force-systemd-env-041000/config.json ...
	I0307 14:37:26.091263    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/force-systemd-env-041000/config.json: {Name:mk1d7ba6743fdd12eb3fa8009c9540e35ba70c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:37:26.091472    3941 start.go:360] acquireMachinesLock for force-systemd-env-041000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:26.091504    3941 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "force-systemd-env-041000"
	I0307 14:37:26.091514    3941 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-041000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:26.091545    3941 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:26.099162    3941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:26.115893    3941 start.go:159] libmachine.API.Create for "force-systemd-env-041000" (driver="qemu2")
	I0307 14:37:26.115928    3941 client.go:168] LocalClient.Create starting
	I0307 14:37:26.115996    3941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:26.116022    3941 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:26.116032    3941 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:26.116078    3941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:26.116102    3941 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:26.116108    3941 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:26.116435    3941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:26.259028    3941 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:26.314087    3941 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:26.314097    3941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:26.314266    3941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:26.327295    3941 main.go:141] libmachine: STDOUT: 
	I0307 14:37:26.327317    3941 main.go:141] libmachine: STDERR: 
	I0307 14:37:26.327380    3941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2 +20000M
	I0307 14:37:26.339212    3941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:26.339230    3941 main.go:141] libmachine: STDERR: 
	I0307 14:37:26.339254    3941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:26.339260    3941 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:26.339292    3941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b5:97:d2:02:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:26.341146    3941 main.go:141] libmachine: STDOUT: 
	I0307 14:37:26.341172    3941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:26.341192    3941 client.go:171] duration metric: took 225.25825ms to LocalClient.Create
	I0307 14:37:28.343417    3941 start.go:128] duration metric: took 2.251835625s to createHost
	I0307 14:37:28.343547    3941 start.go:83] releasing machines lock for "force-systemd-env-041000", held for 2.252015709s
	W0307 14:37:28.343644    3941 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:28.350725    3941 out.go:177] * Deleting "force-systemd-env-041000" in qemu2 ...
	W0307 14:37:28.381528    3941 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:28.381566    3941 start.go:728] Will try again in 5 seconds ...
	I0307 14:37:33.383801    3941 start.go:360] acquireMachinesLock for force-systemd-env-041000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:37:33.418794    3941 start.go:364] duration metric: took 34.871292ms to acquireMachinesLock for "force-systemd-env-041000"
	I0307 14:37:33.418946    3941 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-041000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:37:33.419182    3941 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:37:33.433846    3941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 14:37:33.482574    3941 start.go:159] libmachine.API.Create for "force-systemd-env-041000" (driver="qemu2")
	I0307 14:37:33.482624    3941 client.go:168] LocalClient.Create starting
	I0307 14:37:33.482792    3941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:37:33.482858    3941 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:33.482877    3941 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:33.482945    3941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:37:33.482986    3941 main.go:141] libmachine: Decoding PEM data...
	I0307 14:37:33.482997    3941 main.go:141] libmachine: Parsing certificate...
	I0307 14:37:33.483524    3941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:37:33.639060    3941 main.go:141] libmachine: Creating SSH key...
	I0307 14:37:33.664648    3941 main.go:141] libmachine: Creating Disk image...
	I0307 14:37:33.664653    3941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:37:33.664832    3941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:33.677058    3941 main.go:141] libmachine: STDOUT: 
	I0307 14:37:33.677081    3941 main.go:141] libmachine: STDERR: 
	I0307 14:37:33.677132    3941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2 +20000M
	I0307 14:37:33.688094    3941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:37:33.688131    3941 main.go:141] libmachine: STDERR: 
	I0307 14:37:33.688139    3941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:33.688144    3941 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:37:33.688179    3941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c1:87:8f:4b:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/force-systemd-env-041000/disk.qcow2
	I0307 14:37:33.690038    3941 main.go:141] libmachine: STDOUT: 
	I0307 14:37:33.690054    3941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:37:33.690066    3941 client.go:171] duration metric: took 207.436083ms to LocalClient.Create
	I0307 14:37:35.692281    3941 start.go:128] duration metric: took 2.273063s to createHost
	I0307 14:37:35.692358    3941 start.go:83] releasing machines lock for "force-systemd-env-041000", held for 2.273535625s
	W0307 14:37:35.692742    3941 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-041000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:37:35.705368    3941 out.go:177] 
	W0307 14:37:35.709406    3941 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:37:35.709455    3941 out.go:239] * 
	W0307 14:37:35.712382    3941 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:37:35.722115    3941 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-041000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-041000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-041000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.418167ms)

-- stdout --
	* The control-plane node force-systemd-env-041000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-041000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-041000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-07 14:37:35.818501 -0800 PST m=+3068.313979793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-041000 -n force-systemd-env-041000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-041000 -n force-systemd-env-041000: exit status 7 (34.990291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-041000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-041000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-041000
--- FAIL: TestForceSystemdEnv (10.01s)
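Every qemu2 VM create in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the build agent rather than at anything test-specific. A minimal host-side check, assuming the Homebrew-managed socket_vmnet service implied by the configured paths (/opt/socket_vmnet/bin/socket_vmnet_client, /var/run/socket_vmnet):

	ls -l /var/run/socket_vmnet                        # the unix socket should exist while the daemon runs
	sudo launchctl list | grep -i socket_vmnet         # is the LaunchDaemon loaded?
	sudo "$(which brew)" services start socket_vmnet   # restart it, per minikube's qemu driver docs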

TestFunctional/parallel/ServiceCmdConnect (39.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-836000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-836000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gfnrd" [ec5c5794-652a-4971-b709-4333f321052d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gfnrd" [ec5c5794-652a-4971-b709-4333f321052d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.003572292s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32424
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32424: Get "http://192.168.105.4:32424": dial tcp 192.168.105.4:32424: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-836000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-gfnrd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836000/192.168.105.4
Start Time:       Thu, 07 Mar 2024 13:58:28 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://27ae7bcacdb2e20876c2ff4fca003a402e7f8657ab05a11d21238d1f11eabf2b
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Mar 2024 13:58:52 -0800
      Finished:     Thu, 07 Mar 2024 13:58:52 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qnk9k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-qnk9k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-gfnrd to functional-836000
  Normal   Pulling    38s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     31s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 6.434s (6.434s including waiting)
  Normal   Created    14s (x3 over 31s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 31s)  kubelet            Started container echoserver-arm
  Normal   Pulled     14s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    3s (x4 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-gfnrd_default(ec5c5794-652a-4971-b709-4333f321052d)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-836000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
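The "exec format error" above is the classic symptom of a container binary built for a different CPU architecture than the node (an arm64 VM here), which is why the pod crash-loops immediately after start. One way to confirm the image's platform from inside the VM, using the standard Docker CLI over minikube ssh (a diagnostic suggestion, not part of the test run):

	minikube -p functional-836000 ssh -- docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8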
functional_test.go:1610: (dbg) Run:  kubectl --context functional-836000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.89.210
IPs:                      10.110.89.210
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32424/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
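Note the empty Endpoints field above: with its only backing pod crash-looping, the Service has no ready endpoints, which matches the "connection refused" results against the NodePort earlier. Two standard kubectl checks that would confirm this (context name taken from the test):

	kubectl --context functional-836000 get endpoints hello-node-connect
	kubectl --context functional-836000 get pods -l app=hello-node-connect -o wide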
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-836000 -n functional-836000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-836000 ssh stat                                                                                           | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:58 PST | 07 Mar 24 13:58 PST |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh sudo                                                                                           | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:58 PST | 07 Mar 24 13:58 PST |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:58 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3997279170/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:58 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh -- ls                                                                                          | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh sudo                                                                                           | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-836000 ssh findmnt                                                                                        | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST | 07 Mar 24 13:59 PST |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-836000                                                                                                 | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-836000 --dry-run                                                                                       | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-836000 | jenkins | v1.32.0 | 07 Mar 24 13:59 PST |                     |
	|           | -p functional-836000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 13:59:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 13:59:03.594546    2396 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:59:03.594919    2396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.594925    2396 out.go:304] Setting ErrFile to fd 2...
	I0307 13:59:03.594928    2396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.595107    2396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:59:03.596485    2396 out.go:298] Setting JSON to false
	I0307 13:59:03.614010    2396 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1700,"bootTime":1709847043,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:59:03.614072    2396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:59:03.619306    2396 out.go:177] * [functional-836000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:59:03.626410    2396 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 13:59:03.626462    2396 notify.go:220] Checking for updates...
	I0307 13:59:03.633409    2396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:59:03.636389    2396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:59:03.639433    2396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:59:03.646436    2396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 13:59:03.653358    2396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 13:59:03.657617    2396 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:59:03.657870    2396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:59:03.662269    2396 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 13:59:03.669383    2396 start.go:297] selected driver: qemu2
	I0307 13:59:03.669389    2396 start.go:901] validating driver "qemu2" against &{Name:functional-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:59:03.669451    2396 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 13:59:03.671647    2396 cni.go:84] Creating CNI manager for ""
	I0307 13:59:03.671665    2396 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:59:03.671708    2396 start.go:340] cluster config:
	{Name:functional-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:59:03.686377    2396 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Mar 07 21:58:57 functional-836000 dockerd[7034]: time="2024-03-07T21:58:57.868871090Z" level=info msg="ignoring event" container=4b3363a4604a9e6cbdd8259e99fa7fe9d91fa75e22d356e896b8b531f4b5da78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:58:57 functional-836000 dockerd[7040]: time="2024-03-07T21:58:57.868982465Z" level=warning msg="cleaning up after shim disconnected" id=4b3363a4604a9e6cbdd8259e99fa7fe9d91fa75e22d356e896b8b531f4b5da78 namespace=moby
	Mar 07 21:58:57 functional-836000 dockerd[7040]: time="2024-03-07T21:58:57.868992548Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:58:57 functional-836000 dockerd[7040]: time="2024-03-07T21:58:57.872930749Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:58:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 07 21:58:59 functional-836000 dockerd[7034]: time="2024-03-07T21:58:59.392947051Z" level=info msg="ignoring event" container=2632c5b685031f404350439dcc1c02fe4247869572b8a8413d7c73b8bd8e39b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 21:58:59 functional-836000 dockerd[7040]: time="2024-03-07T21:58:59.393155551Z" level=info msg="shim disconnected" id=2632c5b685031f404350439dcc1c02fe4247869572b8a8413d7c73b8bd8e39b1 namespace=moby
	Mar 07 21:58:59 functional-836000 dockerd[7040]: time="2024-03-07T21:58:59.393210467Z" level=warning msg="cleaning up after shim disconnected" id=2632c5b685031f404350439dcc1c02fe4247869572b8a8413d7c73b8bd8e39b1 namespace=moby
	Mar 07 21:58:59 functional-836000 dockerd[7040]: time="2024-03-07T21:58:59.393215509Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.689255782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.689462740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.689492239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.689601864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.729275543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.729313918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.729319377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:04 functional-836000 dockerd[7040]: time="2024-03-07T21:59:04.729362168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:04 functional-836000 cri-dockerd[7296]: time="2024-03-07T21:59:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9096f302b77258a73f00c68e1adb0bb17e96476f80a1d9d20a43e423f160ea5/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 21:59:04 functional-836000 cri-dockerd[7296]: time="2024-03-07T21:59:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/def2a341c90bcd3a6674f4ee4da06a84d2e17114df104b93a18c2eb1d8a892bd/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 21:59:04 functional-836000 dockerd[7034]: time="2024-03-07T21:59:04.996452662Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Mar 07 21:59:06 functional-836000 cri-dockerd[7296]: time="2024-03-07T21:59:06Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Mar 07 21:59:06 functional-836000 dockerd[7040]: time="2024-03-07T21:59:06.511225743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 21:59:06 functional-836000 dockerd[7040]: time="2024-03-07T21:59:06.511271534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 21:59:06 functional-836000 dockerd[7040]: time="2024-03-07T21:59:06.511277451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:06 functional-836000 dockerd[7040]: time="2024-03-07T21:59:06.511304034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 21:59:06 functional-836000 dockerd[7034]: time="2024-03-07T21:59:06.658930023Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	3cb8089e490c5       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   1 second ago         Running             dashboard-metrics-scraper   0                   d9096f302b772       dashboard-metrics-scraper-7fd5cb4ddc-9mwzs
	4b3363a4604a9       72565bf5bbedf                                                                                          10 seconds ago       Exited              echoserver-arm              2                   4cce57943a740       hello-node-759d89bdcc-2htjx
	5cb93fe3154ca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    10 seconds ago       Exited              mount-munger                0                   2632c5b685031       busybox-mount
	27ae7bcacdb2e       72565bf5bbedf                                                                                          15 seconds ago       Exited              echoserver-arm              2                   f5b57944596cc       hello-node-connect-7799dfb7c6-gfnrd
	3f6323894de01       nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107                          30 seconds ago       Running             myfrontend                  0                   7fcbee428c5ca       sp-pod
	8b2b7e7e8a201       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                          47 seconds ago       Running             nginx                       0                   7aa42a6b645df       nginx-svc
	a8555009ff8d7       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   d52171e46bed0       coredns-5dd5756b68-zrdj8
	6bf2cc1b4b869       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   6744a7ad8ca81       storage-provisioner
	659988e2b5e61       3ca3ca488cf13                                                                                          About a minute ago   Running             kube-proxy                  2                   e1ae96c6ede5a       kube-proxy-dx544
	7c54af5852907       9cdd6470f48c8                                                                                          About a minute ago   Running             etcd                        2                   335db5b520297       etcd-functional-836000
	305f47c4b3380       9961cbceaf234                                                                                          About a minute ago   Running             kube-controller-manager     2                   05d5f93cfd4a0       kube-controller-manager-functional-836000
	400735fe39787       05c284c929889                                                                                          About a minute ago   Running             kube-scheduler              2                   9f8fc1989491b       kube-scheduler-functional-836000
	65563d3fd4830       04b4c447bb9d4                                                                                          About a minute ago   Running             kube-apiserver              0                   59a031a7de65e       kube-apiserver-functional-836000
	45aa3aaee8a98       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   a024b177c0c3d       coredns-5dd5756b68-zrdj8
	c3bf730870421       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         1                   afae5b10ba0e2       storage-provisioner
	9f88217f5abc2       3ca3ca488cf13                                                                                          2 minutes ago        Exited              kube-proxy                  1                   3c000a3fdcb24       kube-proxy-dx544
	7890324dcf522       9cdd6470f48c8                                                                                          2 minutes ago        Exited              etcd                        1                   d4f05b36e8ed4       etcd-functional-836000
	7665b38b1ce37       05c284c929889                                                                                          2 minutes ago        Exited              kube-scheduler              1                   3016947ea2795       kube-scheduler-functional-836000
	50ef766f33a4a       9961cbceaf234                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   ae46771d3bca6       kube-controller-manager-functional-836000
	
	
	==> coredns [45aa3aaee8a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44139 - 36160 "HINFO IN 3452098514840508433.544491595962185437. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009178901s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a8555009ff8d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33094 - 12343 "HINFO IN 3951182610231781618.7903230198375525754. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009296936s
	[INFO] 10.244.0.1:25634 - 53950 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000094875s
	[INFO] 10.244.0.1:17804 - 14647 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000088875s
	[INFO] 10.244.0.1:26340 - 26852 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000022083s
	[INFO] 10.244.0.1:18252 - 3741 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000938082s
	[INFO] 10.244.0.1:59867 - 12308 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00005075s
	[INFO] 10.244.0.1:30037 - 44446 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000054333s
	
	
	==> describe nodes <==
	Name:               functional-836000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-836000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=functional-836000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T13_56_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 21:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-836000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 21:59:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 21:58:54 +0000   Thu, 07 Mar 2024 21:56:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 21:58:54 +0000   Thu, 07 Mar 2024 21:56:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 21:58:54 +0000   Thu, 07 Mar 2024 21:56:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 21:58:54 +0000   Thu, 07 Mar 2024 21:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-836000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2e65d77d49641e4a4a956ece2fea5ab
	  System UUID:                d2e65d77d49641e4a4a956ece2fea5ab
	  Boot ID:                    0042e42e-411e-40ab-a99b-9d4b0dd9908f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-2htjx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     hello-node-connect-7799dfb7c6-gfnrd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 coredns-5dd5756b68-zrdj8                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m23s
	  kube-system                 etcd-functional-836000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m36s
	  kube-system                 kube-apiserver-functional-836000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-836000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-dx544                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-functional-836000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-9mwzs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-scpwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m22s                  kube-proxy       
	  Normal  Starting                 72s                    kube-proxy       
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 2m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node functional-836000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node functional-836000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m41s)  kubelet          Node functional-836000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m36s                  kubelet          Node functional-836000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m36s                  kubelet          Node functional-836000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s                  kubelet          Node functional-836000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m32s                  kubelet          Node functional-836000 status is now: NodeReady
	  Normal  RegisteredNode           2m24s                  node-controller  Node functional-836000 event: Registered Node functional-836000 in Controller
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)    kubelet          Node functional-836000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)    kubelet          Node functional-836000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)    kubelet          Node functional-836000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                   node-controller  Node functional-836000 event: Registered Node functional-836000 in Controller
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node functional-836000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node functional-836000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node functional-836000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                    node-controller  Node functional-836000 event: Registered Node functional-836000 in Controller
	
	
	==> dmesg <==
	[ +11.252878] kauditd_printk_skb: 27 callbacks suppressed
	[  +1.571281] systemd-fstab-generator[5932]: Ignoring "noauto" option for root device
	[ +17.997753] systemd-fstab-generator[6568]: Ignoring "noauto" option for root device
	[  +0.054983] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.102650] systemd-fstab-generator[6602]: Ignoring "noauto" option for root device
	[  +0.102019] systemd-fstab-generator[6614]: Ignoring "noauto" option for root device
	[  +0.094819] systemd-fstab-generator[6628]: Ignoring "noauto" option for root device
	[  +5.101476] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.354883] systemd-fstab-generator[7188]: Ignoring "noauto" option for root device
	[  +0.083288] systemd-fstab-generator[7200]: Ignoring "noauto" option for root device
	[  +0.082311] systemd-fstab-generator[7212]: Ignoring "noauto" option for root device
	[  +0.081199] systemd-fstab-generator[7232]: Ignoring "noauto" option for root device
	[  +0.216838] systemd-fstab-generator[7437]: Ignoring "noauto" option for root device
	[  +0.786101] systemd-fstab-generator[7555]: Ignoring "noauto" option for root device
	[  +4.359204] kauditd_printk_skb: 202 callbacks suppressed
	[Mar 7 21:58] kauditd_printk_skb: 27 callbacks suppressed
	[  +2.338172] systemd-fstab-generator[8741]: Ignoring "noauto" option for root device
	[  +4.421352] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.232005] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.942255] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.393365] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.933871] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.458681] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.762736] kauditd_printk_skb: 6 callbacks suppressed
	[Mar 7 21:59] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [7890324dcf52] <==
	{"level":"info","ts":"2024-03-07T21:57:04.15721Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T21:57:05.343705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-07T21:57:05.343852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-07T21:57:05.343905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-03-07T21:57:05.343942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:05.343959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:05.343985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:05.344015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:05.348638Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:57:05.348655Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-836000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T21:57:05.349432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:57:05.351471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-07T21:57:05.351481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T21:57:05.351619Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T21:57:05.353302Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T21:57:38.054828Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-07T21:57:38.054854Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-836000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-03-07T21:57:38.054925Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T21:57:38.054966Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T21:57:38.069595Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T21:57:38.069686Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-07T21:57:38.069766Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-03-07T21:57:38.071429Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T21:57:38.071506Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T21:57:38.071522Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-836000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [7c54af585290] <==
	{"level":"info","ts":"2024-03-07T21:57:52.197762Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T21:57:52.197784Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T21:57:52.19789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-03-07T21:57:52.197926Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-03-07T21:57:52.197986Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:57:52.198012Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:57:52.199394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T21:57:52.201231Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T21:57:52.202485Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T21:57:52.202576Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T21:57:52.202606Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T21:57:53.170555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:53.170692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:53.170737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T21:57:53.170769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-03-07T21:57:53.170793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-07T21:57:53.170835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-03-07T21:57:53.170855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-07T21:57:53.17562Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-836000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T21:57:53.175696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:57:53.175819Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T21:57:53.175845Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T21:57:53.175874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:57:53.178434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-07T21:57:53.179143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:59:07 up 2 min,  0 users,  load average: 0.40, 0.27, 0.11
	Linux functional-836000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65563d3fd483] <==
	I0307 21:57:53.841382       1 aggregator.go:166] initial CRD sync complete...
	I0307 21:57:53.841397       1 autoregister_controller.go:141] Starting autoregister controller
	I0307 21:57:53.841416       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0307 21:57:53.841429       1 cache.go:39] Caches are synced for autoregister controller
	I0307 21:57:53.841889       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 21:57:53.842385       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0307 21:57:53.842408       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0307 21:57:53.844455       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0307 21:57:53.892682       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0307 21:57:54.742100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 21:57:54.870124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0307 21:57:54.873380       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0307 21:57:54.886711       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0307 21:57:54.898449       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 21:57:54.900678       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 21:58:06.029687       1 controller.go:624] quota admission added evaluator for: endpoints
	I0307 21:58:06.130547       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 21:58:12.889282       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.92.133"}
	I0307 21:58:17.777684       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.241.157"}
	I0307 21:58:28.174316       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0307 21:58:28.220046       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.89.210"}
	I0307 21:58:43.391337       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.245.48"}
	I0307 21:59:04.268174       1 controller.go:624] quota admission added evaluator for: namespaces
	I0307 21:59:04.332743       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.191.234"}
	I0307 21:59:04.367185       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.175.24"}
	
	
	==> kube-controller-manager [305f47c4b338] <==
	E0307 21:59:04.303948       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.304051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.776725ms"
	E0307 21:59:04.304063       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.306539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.571451ms"
	E0307 21:59:04.306866       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.306854       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 21:59:04.307820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.34991ms"
	E0307 21:59:04.307938       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.307930       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 21:59:04.310870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="1.812162ms"
	E0307 21:59:04.310881       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.310893       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 21:59:04.312399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.780204ms"
	E0307 21:59:04.312458       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 21:59:04.312450       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 21:59:04.333776       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-9mwzs"
	I0307 21:59:04.337320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="5.576359ms"
	I0307 21:59:04.349696       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-scpwb"
	I0307 21:59:04.363889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.01804ms"
	I0307 21:59:04.365726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="28.221837ms"
	I0307 21:59:04.365816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="27.792µs"
	I0307 21:59:04.377378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.383462ms"
	I0307 21:59:04.377402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.125µs"
	I0307 21:59:07.381630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.319245ms"
	I0307 21:59:07.381655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13µs"
	
	
	==> kube-controller-manager [50ef766f33a4] <==
	I0307 21:57:18.084497       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0307 21:57:18.086665       1 shared_informer.go:318] Caches are synced for HPA
	I0307 21:57:18.088882       1 shared_informer.go:318] Caches are synced for taint
	I0307 21:57:18.088931       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0307 21:57:18.088934       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0307 21:57:18.088976       1 taint_manager.go:210] "Sending events to api server"
	I0307 21:57:18.088962       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-836000"
	I0307 21:57:18.089066       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0307 21:57:18.089116       1 shared_informer.go:318] Caches are synced for persistent volume
	I0307 21:57:18.089127       1 event.go:307] "Event occurred" object="functional-836000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-836000 event: Registered Node functional-836000 in Controller"
	I0307 21:57:18.097647       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0307 21:57:18.098810       1 shared_informer.go:318] Caches are synced for TTL
	I0307 21:57:18.099892       1 shared_informer.go:318] Caches are synced for expand
	I0307 21:57:18.100963       1 shared_informer.go:318] Caches are synced for service account
	I0307 21:57:18.103129       1 shared_informer.go:318] Caches are synced for stateful set
	I0307 21:57:18.150535       1 shared_informer.go:318] Caches are synced for attach detach
	I0307 21:57:18.187848       1 shared_informer.go:318] Caches are synced for disruption
	I0307 21:57:18.196054       1 shared_informer.go:318] Caches are synced for deployment
	I0307 21:57:18.198156       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0307 21:57:18.198226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.333µs"
	I0307 21:57:18.204205       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 21:57:18.295783       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 21:57:18.612951       1 shared_informer.go:318] Caches are synced for garbage collector
	I0307 21:57:18.650140       1 shared_informer.go:318] Caches are synced for garbage collector
	I0307 21:57:18.650152       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [659988e2b5e6] <==
	I0307 21:57:55.149827       1 server_others.go:69] "Using iptables proxy"
	I0307 21:57:55.154254       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0307 21:57:55.162443       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 21:57:55.162455       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 21:57:55.163075       1 server_others.go:152] "Using iptables Proxier"
	I0307 21:57:55.163095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 21:57:55.163163       1 server.go:846] "Version info" version="v1.28.4"
	I0307 21:57:55.163171       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:57:55.163616       1 config.go:188] "Starting service config controller"
	I0307 21:57:55.163652       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 21:57:55.163676       1 config.go:97] "Starting endpoint slice config controller"
	I0307 21:57:55.163690       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 21:57:55.163855       1 config.go:315] "Starting node config controller"
	I0307 21:57:55.163875       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 21:57:55.265093       1 shared_informer.go:318] Caches are synced for node config
	I0307 21:57:55.265093       1 shared_informer.go:318] Caches are synced for service config
	I0307 21:57:55.265101       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9f88217f5abc] <==
	I0307 21:57:06.977267       1 server_others.go:69] "Using iptables proxy"
	I0307 21:57:06.985717       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0307 21:57:07.007475       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 21:57:07.007486       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 21:57:07.008509       1 server_others.go:152] "Using iptables Proxier"
	I0307 21:57:07.008608       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 21:57:07.008762       1 server.go:846] "Version info" version="v1.28.4"
	I0307 21:57:07.008770       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:57:07.009131       1 config.go:188] "Starting service config controller"
	I0307 21:57:07.009143       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 21:57:07.009150       1 config.go:97] "Starting endpoint slice config controller"
	I0307 21:57:07.009152       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 21:57:07.009516       1 config.go:315] "Starting node config controller"
	I0307 21:57:07.009736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 21:57:07.109917       1 shared_informer.go:318] Caches are synced for node config
	I0307 21:57:07.109939       1 shared_informer.go:318] Caches are synced for service config
	I0307 21:57:07.110030       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [400735fe3978] <==
	I0307 21:57:51.733259       1 serving.go:348] Generated self-signed cert in-memory
	W0307 21:57:53.764974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 21:57:53.764989       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 21:57:53.764993       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 21:57:53.765006       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 21:57:53.817849       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0307 21:57:53.817891       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:57:53.818734       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0307 21:57:53.818783       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0307 21:57:53.818797       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 21:57:53.818805       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0307 21:57:53.919895       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7665b38b1ce3] <==
	I0307 21:57:04.442718       1 serving.go:348] Generated self-signed cert in-memory
	W0307 21:57:05.965506       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 21:57:05.965601       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 21:57:05.965636       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 21:57:05.965652       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 21:57:05.983145       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0307 21:57:05.983259       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:57:05.984293       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0307 21:57:05.984320       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 21:57:05.984585       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0307 21:57:05.984645       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0307 21:57:06.084829       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 21:57:38.069173       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0307 21:57:38.069220       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0307 21:57:38.069283       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 07 21:58:53 functional-836000 kubelet[7562]: I0307 21:58:53.280059    7562 scope.go:117] "RemoveContainer" containerID="5eed9a8372ade809c5b1a95a618675e4bec60daec1a587a3ae756099d255338a"
	Mar 07 21:58:53 functional-836000 kubelet[7562]: I0307 21:58:53.280225    7562 scope.go:117] "RemoveContainer" containerID="27ae7bcacdb2e20876c2ff4fca003a402e7f8657ab05a11d21238d1f11eabf2b"
	Mar 07 21:58:53 functional-836000 kubelet[7562]: E0307 21:58:53.280321    7562 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-gfnrd_default(ec5c5794-652a-4971-b709-4333f321052d)\"" pod="default/hello-node-connect-7799dfb7c6-gfnrd" podUID="ec5c5794-652a-4971-b709-4333f321052d"
	Mar 07 21:58:57 functional-836000 kubelet[7562]: I0307 21:58:57.818034    7562 scope.go:117] "RemoveContainer" containerID="b1bcb645e4d7b4ad6c3bb75e1172967faf7713e7ddf3b10a095d3feb04720236"
	Mar 07 21:58:58 functional-836000 kubelet[7562]: I0307 21:58:58.314499    7562 scope.go:117] "RemoveContainer" containerID="b1bcb645e4d7b4ad6c3bb75e1172967faf7713e7ddf3b10a095d3feb04720236"
	Mar 07 21:58:58 functional-836000 kubelet[7562]: I0307 21:58:58.314601    7562 scope.go:117] "RemoveContainer" containerID="4b3363a4604a9e6cbdd8259e99fa7fe9d91fa75e22d356e896b8b531f4b5da78"
	Mar 07 21:58:58 functional-836000 kubelet[7562]: E0307 21:58:58.315994    7562 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-2htjx_default(18ac08df-803f-4ddd-94c1-c13993ed3531)\"" pod="default/hello-node-759d89bdcc-2htjx" podUID="18ac08df-803f-4ddd-94c1-c13993ed3531"
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.560966    7562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79hsf\" (UniqueName: \"kubernetes.io/projected/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-kube-api-access-79hsf\") pod \"e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2\" (UID: \"e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2\") "
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.561003    7562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-test-volume\") pod \"e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2\" (UID: \"e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2\") "
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.561054    7562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-test-volume" (OuterVolumeSpecName: "test-volume") pod "e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2" (UID: "e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.564166    7562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-kube-api-access-79hsf" (OuterVolumeSpecName: "kube-api-access-79hsf") pod "e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2" (UID: "e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2"). InnerVolumeSpecName "kube-api-access-79hsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.661644    7562 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-test-volume\") on node \"functional-836000\" DevicePath \"\""
	Mar 07 21:58:59 functional-836000 kubelet[7562]: I0307 21:58:59.661663    7562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-79hsf\" (UniqueName: \"kubernetes.io/projected/e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2-kube-api-access-79hsf\") on node \"functional-836000\" DevicePath \"\""
	Mar 07 21:59:00 functional-836000 kubelet[7562]: I0307 21:59:00.332854    7562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2632c5b685031f404350439dcc1c02fe4247869572b8a8413d7c73b8bd8e39b1"
	Mar 07 21:59:03 functional-836000 kubelet[7562]: I0307 21:59:03.818092    7562 scope.go:117] "RemoveContainer" containerID="27ae7bcacdb2e20876c2ff4fca003a402e7f8657ab05a11d21238d1f11eabf2b"
	Mar 07 21:59:03 functional-836000 kubelet[7562]: E0307 21:59:03.818191    7562 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-gfnrd_default(ec5c5794-652a-4971-b709-4333f321052d)\"" pod="default/hello-node-connect-7799dfb7c6-gfnrd" podUID="ec5c5794-652a-4971-b709-4333f321052d"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.341272    7562 topology_manager.go:215] "Topology Admit Handler" podUID="ad9e6364-f6cb-4b86-9b2c-9b02771baa9e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-9mwzs"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: E0307 21:59:04.341311    7562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2" containerName="mount-munger"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.341328    7562 memory_manager.go:346] "RemoveStaleState removing state" podUID="e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2" containerName="mount-munger"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.362678    7562 topology_manager.go:215] "Topology Admit Handler" podUID="29d360b2-85e0-45ce-93f4-93b0e9c1e75d" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-scpwb"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.483923    7562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqqn\" (UniqueName: \"kubernetes.io/projected/29d360b2-85e0-45ce-93f4-93b0e9c1e75d-kube-api-access-fnqqn\") pod \"kubernetes-dashboard-8694d4445c-scpwb\" (UID: \"29d360b2-85e0-45ce-93f4-93b0e9c1e75d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-scpwb"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.483972    7562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ad9e6364-f6cb-4b86-9b2c-9b02771baa9e-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-9mwzs\" (UID: \"ad9e6364-f6cb-4b86-9b2c-9b02771baa9e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-9mwzs"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.483986    7562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9qr\" (UniqueName: \"kubernetes.io/projected/ad9e6364-f6cb-4b86-9b2c-9b02771baa9e-kube-api-access-pr9qr\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-9mwzs\" (UID: \"ad9e6364-f6cb-4b86-9b2c-9b02771baa9e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-9mwzs"
	Mar 07 21:59:04 functional-836000 kubelet[7562]: I0307 21:59:04.483996    7562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/29d360b2-85e0-45ce-93f4-93b0e9c1e75d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-scpwb\" (UID: \"29d360b2-85e0-45ce-93f4-93b0e9c1e75d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-scpwb"
	Mar 07 21:59:07 functional-836000 kubelet[7562]: I0307 21:59:07.379439    7562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-9mwzs" podStartSLOduration=1.71393632 podCreationTimestamp="2024-03-07 21:59:04 +0000 UTC" firstStartedPulling="2024-03-07 21:59:04.7768897 +0000 UTC m=+74.038608373" lastFinishedPulling="2024-03-07 21:59:06.442366034 +0000 UTC m=+75.704084665" observedRunningTime="2024-03-07 21:59:07.377531198 +0000 UTC m=+76.639249829" watchObservedRunningTime="2024-03-07 21:59:07.379412612 +0000 UTC m=+76.641131243"
	
	
	==> storage-provisioner [6bf2cc1b4b86] <==
	I0307 21:57:55.113693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 21:57:55.120640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 21:57:55.120744       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 21:58:12.512025       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 21:58:12.512088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-836000_b3a679e2-da4c-4c89-8ac9-f831011422cc!
	I0307 21:58:12.512272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc56af94-8a32-473a-a767-c4f3ed94a534", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-836000_b3a679e2-da4c-4c89-8ac9-f831011422cc became leader
	I0307 21:58:12.613237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-836000_b3a679e2-da4c-4c89-8ac9-f831011422cc!
	I0307 21:58:23.631655       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0307 21:58:23.631726       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    49e6fe59-9d53-43a2-8056-9ef788e6ff43 370 0 2024-03-07 21:56:44 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-07 21:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c08b2c59-b984-489e-8024-a2b04a4ca4cb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c08b2c59-b984-489e-8024-a2b04a4ca4cb 675 0 2024-03-07 21:58:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-07 21:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-07 21:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0307 21:58:23.632141       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c08b2c59-b984-489e-8024-a2b04a4ca4cb" provisioned
	I0307 21:58:23.632160       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0307 21:58:23.632176       1 volume_store.go:212] Trying to save persistentvolume "pvc-c08b2c59-b984-489e-8024-a2b04a4ca4cb"
	I0307 21:58:23.632714       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c08b2c59-b984-489e-8024-a2b04a4ca4cb", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0307 21:58:23.636575       1 volume_store.go:219] persistentvolume "pvc-c08b2c59-b984-489e-8024-a2b04a4ca4cb" saved
	I0307 21:58:23.636784       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c08b2c59-b984-489e-8024-a2b04a4ca4cb", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c08b2c59-b984-489e-8024-a2b04a4ca4cb
	
	
	==> storage-provisioner [c3bf73087042] <==
	I0307 21:57:06.945775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 21:57:06.954186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 21:57:06.954208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 21:57:24.345145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 21:57:24.345242       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-836000_fc216c22-9cbd-44d2-b629-e97431740c63!
	I0307 21:57:24.345430       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc56af94-8a32-473a-a767-c4f3ed94a534", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-836000_fc216c22-9cbd-44d2-b629-e97431740c63 became leader
	I0307 21:57:24.446045       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-836000_fc216c22-9cbd-44d2-b629-e97431740c63!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-836000 -n functional-836000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-836000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-8694d4445c-scpwb
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-836000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-scpwb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-836000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-scpwb: exit status 1 (47.854125ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836000/192.168.105.4
	Start Time:       Thu, 07 Mar 2024 13:58:51 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://5cb93fe3154ca9beab350980f6f79fe64d1ce074dbc5f93778afb96bfa7520c0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 07 Mar 2024 13:58:57 -0800
	      Finished:     Thu, 07 Mar 2024 13:58:57 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-79hsf (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-79hsf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16s   default-scheduler  Successfully assigned default/busybox-mount to functional-836000
	  Normal  Pulling    16s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.671s (5.671s including waiting)
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-scpwb" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-836000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-scpwb: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.67s)
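
The describe step above exits non-zero only because the dashboard pod was deleted between the field-selector query and the describe call, as the NotFound stderr shows. For reference, a minimal Go sketch of the same non-running-pod query the harness runs (the helper name and error handling here are ours, not from helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// nonRunningPods mirrors the query the harness runs: list every pod in the
	// cluster whose status.phase is not Running. A pod can disappear between
	// this call and a follow-up "kubectl describe", which is why describe
	// returned NotFound for the dashboard pod above.
	func nonRunningPods(context string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := nonRunningPods("functional-836000")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println("non-running pods:", pods)
	}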

TestMutliControlPlane/serial/StopSecondaryNode (312.31s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-827000 node stop m02 -v=7 --alsologtostderr: (12.195585834s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr
E0307 14:12:13.852628    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr: exit status 7 (3m45.047441042s)

-- stdout --
	ha-827000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-827000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-827000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-827000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 14:09:13.587780    2967 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:09:13.588339    2967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:09:13.588345    2967 out.go:304] Setting ErrFile to fd 2...
	I0307 14:09:13.588347    2967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:09:13.588528    2967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:09:13.588683    2967 out.go:298] Setting JSON to false
	I0307 14:09:13.588700    2967 mustload.go:65] Loading cluster: ha-827000
	I0307 14:09:13.588939    2967 notify.go:220] Checking for updates...
	I0307 14:09:13.589535    2967 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:09:13.589544    2967 status.go:255] checking status of ha-827000 ...
	I0307 14:09:13.590343    2967 status.go:330] ha-827000 host status = "Running" (err=<nil>)
	I0307 14:09:13.590354    2967 host.go:66] Checking if "ha-827000" exists ...
	I0307 14:09:13.590461    2967 host.go:66] Checking if "ha-827000" exists ...
	I0307 14:09:13.590574    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:09:13.590583    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000/id_rsa Username:docker}
	W0307 14:10:28.590714    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 14:10:28.591007    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:10:28.591019    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 14:10:28.591024    2967 status.go:257] ha-827000 status: &{Name:ha-827000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:10:28.591040    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 14:10:28.591044    2967 status.go:255] checking status of ha-827000-m02 ...
	I0307 14:10:28.591258    2967 status.go:330] ha-827000-m02 host status = "Stopped" (err=<nil>)
	I0307 14:10:28.591264    2967 status.go:343] host is not running, skipping remaining checks
	I0307 14:10:28.591266    2967 status.go:257] ha-827000-m02 status: &{Name:ha-827000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 14:10:28.591270    2967 status.go:255] checking status of ha-827000-m03 ...
	I0307 14:10:28.592058    2967 status.go:330] ha-827000-m03 host status = "Running" (err=<nil>)
	I0307 14:10:28.592066    2967 host.go:66] Checking if "ha-827000-m03" exists ...
	I0307 14:10:28.592177    2967 host.go:66] Checking if "ha-827000-m03" exists ...
	I0307 14:10:28.592292    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:10:28.592298    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m03/id_rsa Username:docker}
	W0307 14:11:43.591117    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 14:11:43.591161    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 14:11:43.591169    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 14:11:43.591173    2967 status.go:257] ha-827000-m03 status: &{Name:ha-827000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:11:43.591182    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 14:11:43.591185    2967 status.go:255] checking status of ha-827000-m04 ...
	I0307 14:11:43.591944    2967 status.go:330] ha-827000-m04 host status = "Running" (err=<nil>)
	I0307 14:11:43.591951    2967 host.go:66] Checking if "ha-827000-m04" exists ...
	I0307 14:11:43.592039    2967 host.go:66] Checking if "ha-827000-m04" exists ...
	I0307 14:11:43.592149    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:11:43.592155    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m04/id_rsa Username:docker}
	W0307 14:12:58.592159    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 14:12:58.592297    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 14:12:58.592323    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 14:12:58.592340    2967 status.go:257] ha-827000-m04 status: &{Name:ha-827000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:12:58.592380    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr": ha-827000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-827000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr": ha-827000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-827000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr": ha-827000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-827000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-827000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000
E0307 14:13:17.361637    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:13:36.918472    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000: exit status 3 (1m15.069638334s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 14:14:13.662344    3001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:14:13.662370    3001 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-827000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (312.31s)
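
Every unreachable node above fails the same way: the status probe's SSH dial to port 22 hangs until the connect timeout (the log gaps are exactly 75s per node, e.g. 14:09:13 to 14:10:28), which is what stretches the status call to 3m45s. A rough sketch of that failing first step, assuming a bare TCP dial stands in for minikube's sshutil client:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH approximates the first step of the status check: open a TCP
	// connection to the node's SSH port before running "df -h /var". When the
	// VM's network is gone the dial blocks until the connect timeout, which is
	// what produces the "connect: operation timed out" errors in the log.
	func probeSSH(ip string) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 75*time.Second)
		if err != nil {
			return fmt.Errorf("host would be reported as Error: %w", err)
		}
		return conn.Close()
	}

	func main() {
		// Node IPs taken from the status log above.
		for _, ip := range []string{"192.168.105.5", "192.168.105.7", "192.168.105.8"} {
			fmt.Println(ip, "->", probeSSH(ip))
		}
	}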

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (227.3s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m32.251982708s)
ha_test.go:413: expected profile "ha-827000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-827000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-827000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-827000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000
E0307 14:17:13.845781    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000: exit status 3 (1m15.043258834s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 14:18:00.953868    3069 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:18:00.953898    3069 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-827000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (227.30s)
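
The assertion at ha_test.go:413 only compares the Status field buried in the profile-list JSON quoted above. A small sketch that decodes just those fields (struct and variable names are ours; the JSON shape is taken from the quoted output):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only what the assertion reads: each valid profile's
	// Name and Status ("Degraded" was expected, "Stopped" was observed).
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated payload in the shape of the quoted 'profile list' output.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-827000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // prints "ha-827000: Stopped"
		}
	}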

TestMutliControlPlane/serial/RestartSecondaryNode (305.26s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-827000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.131140084s)

-- stdout --
	* Starting "ha-827000-m02" control-plane node in "ha-827000" cluster
	* Restarting existing qemu2 VM for "ha-827000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-827000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:18:01.009930    3074 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:18:01.010208    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:18:01.010212    3074 out.go:304] Setting ErrFile to fd 2...
	I0307 14:18:01.010215    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:18:01.010371    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:18:01.010693    3074 mustload.go:65] Loading cluster: ha-827000
	I0307 14:18:01.010993    3074 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 14:18:01.011285    3074 host.go:58] "ha-827000-m02" host status: Stopped
	I0307 14:18:01.016049    3074 out.go:177] * Starting "ha-827000-m02" control-plane node in "ha-827000" cluster
	I0307 14:18:01.019083    3074 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:18:01.019100    3074 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:18:01.019110    3074 cache.go:56] Caching tarball of preloaded images
	I0307 14:18:01.019228    3074 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:18:01.019245    3074 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:18:01.019332    3074 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/ha-827000/config.json ...
	I0307 14:18:01.019790    3074 start.go:360] acquireMachinesLock for ha-827000-m02: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:18:01.019834    3074 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "ha-827000-m02"
	I0307 14:18:01.019843    3074 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:18:01.019848    3074 fix.go:54] fixHost starting: m02
	I0307 14:18:01.019992    3074 fix.go:112] recreateIfNeeded on ha-827000-m02: state=Stopped err=<nil>
	W0307 14:18:01.019998    3074 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:18:01.024036    3074 out.go:177] * Restarting existing qemu2 VM for "ha-827000-m02" ...
	I0307 14:18:01.027052    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:47:45:25:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/disk.qcow2
	I0307 14:18:01.029753    3074 main.go:141] libmachine: STDOUT: 
	I0307 14:18:01.029774    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:18:01.029801    3074 fix.go:56] duration metric: took 9.952125ms for fixHost
	I0307 14:18:01.029805    3074 start.go:83] releasing machines lock for "ha-827000-m02", held for 9.967083ms
	W0307 14:18:01.029811    3074 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:18:01.029843    3074 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:18:01.029848    3074 start.go:728] Will try again in 5 seconds ...
	I0307 14:18:06.031979    3074 start.go:360] acquireMachinesLock for ha-827000-m02: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:18:06.032462    3074 start.go:364] duration metric: took 404.292µs to acquireMachinesLock for "ha-827000-m02"
	I0307 14:18:06.032627    3074 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:18:06.032649    3074 fix.go:54] fixHost starting: m02
	I0307 14:18:06.033353    3074 fix.go:112] recreateIfNeeded on ha-827000-m02: state=Stopped err=<nil>
	W0307 14:18:06.033378    3074 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:18:06.037628    3074 out.go:177] * Restarting existing qemu2 VM for "ha-827000-m02" ...
	I0307 14:18:06.041722    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:47:45:25:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/disk.qcow2
	I0307 14:18:06.051277    3074 main.go:141] libmachine: STDOUT: 
	I0307 14:18:06.051353    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:18:06.051470    3074 fix.go:56] duration metric: took 18.801333ms for fixHost
	I0307 14:18:06.051497    3074 start.go:83] releasing machines lock for "ha-827000-m02", held for 19.01225ms
	W0307 14:18:06.051694    3074 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-827000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-827000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:18:06.056733    3074 out.go:177] 
	W0307 14:18:06.060691    3074 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:18:06.060715    3074 out.go:239] * 
	* 
	W0307 14:18:06.074602    3074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:18:06.079516    3074 out.go:177] 

** /stderr **
ha_test.go:422: I0307 14:18:01.009930    3074 out.go:291] Setting OutFile to fd 1 ...
I0307 14:18:01.010208    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 14:18:01.010212    3074 out.go:304] Setting ErrFile to fd 2...
I0307 14:18:01.010215    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 14:18:01.010371    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 14:18:01.010693    3074 mustload.go:65] Loading cluster: ha-827000
I0307 14:18:01.010993    3074 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
W0307 14:18:01.011285    3074 host.go:58] "ha-827000-m02" host status: Stopped
I0307 14:18:01.016049    3074 out.go:177] * Starting "ha-827000-m02" control-plane node in "ha-827000" cluster
I0307 14:18:01.019083    3074 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0307 14:18:01.019100    3074 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0307 14:18:01.019110    3074 cache.go:56] Caching tarball of preloaded images
I0307 14:18:01.019228    3074 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0307 14:18:01.019245    3074 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0307 14:18:01.019332    3074 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/ha-827000/config.json ...
I0307 14:18:01.019790    3074 start.go:360] acquireMachinesLock for ha-827000-m02: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 14:18:01.019834    3074 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "ha-827000-m02"
I0307 14:18:01.019843    3074 start.go:96] Skipping create...Using existing machine configuration
I0307 14:18:01.019848    3074 fix.go:54] fixHost starting: m02
I0307 14:18:01.019992    3074 fix.go:112] recreateIfNeeded on ha-827000-m02: state=Stopped err=<nil>
W0307 14:18:01.019998    3074 fix.go:138] unexpected machine state, will restart: <nil>
I0307 14:18:01.024036    3074 out.go:177] * Restarting existing qemu2 VM for "ha-827000-m02" ...
I0307 14:18:01.027052    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:47:45:25:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/disk.qcow2
I0307 14:18:01.029753    3074 main.go:141] libmachine: STDOUT: 
I0307 14:18:01.029774    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0307 14:18:01.029801    3074 fix.go:56] duration metric: took 9.952125ms for fixHost
I0307 14:18:01.029805    3074 start.go:83] releasing machines lock for "ha-827000-m02", held for 9.967083ms
W0307 14:18:01.029811    3074 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0307 14:18:01.029843    3074 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0307 14:18:01.029848    3074 start.go:728] Will try again in 5 seconds ...
I0307 14:18:06.031979    3074 start.go:360] acquireMachinesLock for ha-827000-m02: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 14:18:06.032462    3074 start.go:364] duration metric: took 404.292µs to acquireMachinesLock for "ha-827000-m02"
I0307 14:18:06.032627    3074 start.go:96] Skipping create...Using existing machine configuration
I0307 14:18:06.032649    3074 fix.go:54] fixHost starting: m02
I0307 14:18:06.033353    3074 fix.go:112] recreateIfNeeded on ha-827000-m02: state=Stopped err=<nil>
W0307 14:18:06.033378    3074 fix.go:138] unexpected machine state, will restart: <nil>
I0307 14:18:06.037628    3074 out.go:177] * Restarting existing qemu2 VM for "ha-827000-m02" ...
I0307 14:18:06.041722    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:47:45:25:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/disk.qcow2
I0307 14:18:06.051277    3074 main.go:141] libmachine: STDOUT: 
I0307 14:18:06.051353    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0307 14:18:06.051470    3074 fix.go:56] duration metric: took 18.801333ms for fixHost
I0307 14:18:06.051497    3074 start.go:83] releasing machines lock for "ha-827000-m02", held for 19.01225ms
W0307 14:18:06.051694    3074 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-827000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-827000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0307 14:18:06.056733    3074 out.go:177] 
W0307 14:18:06.060691    3074 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0307 14:18:06.060715    3074 out.go:239] * 
* 
W0307 14:18:06.074602    3074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 14:18:06.079516    3074 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-827000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr
E0307 14:18:17.354939    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:19:40.419621    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr: exit status 7 (3m45.081891833s)

-- stdout --
	ha-827000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-827000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-827000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-827000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 14:18:06.150660    3078 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:18:06.150852    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:18:06.150855    3078 out.go:304] Setting ErrFile to fd 2...
	I0307 14:18:06.150858    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:18:06.151056    3078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:18:06.151211    3078 out.go:298] Setting JSON to false
	I0307 14:18:06.151226    3078 mustload.go:65] Loading cluster: ha-827000
	I0307 14:18:06.151266    3078 notify.go:220] Checking for updates...
	I0307 14:18:06.151508    3078 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:18:06.151515    3078 status.go:255] checking status of ha-827000 ...
	I0307 14:18:06.152526    3078 status.go:330] ha-827000 host status = "Running" (err=<nil>)
	I0307 14:18:06.152542    3078 host.go:66] Checking if "ha-827000" exists ...
	I0307 14:18:06.152695    3078 host.go:66] Checking if "ha-827000" exists ...
	I0307 14:18:06.152832    3078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:18:06.152842    3078 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000/id_rsa Username:docker}
	W0307 14:19:21.153309    3078 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 14:19:21.156134    3078 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:19:21.156199    3078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 14:19:21.156274    3078 status.go:257] ha-827000 status: &{Name:ha-827000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:19:21.156356    3078 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 14:19:21.156383    3078 status.go:255] checking status of ha-827000-m02 ...
	I0307 14:19:21.157381    3078 status.go:330] ha-827000-m02 host status = "Stopped" (err=<nil>)
	I0307 14:19:21.157402    3078 status.go:343] host is not running, skipping remaining checks
	I0307 14:19:21.157413    3078 status.go:257] ha-827000-m02 status: &{Name:ha-827000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 14:19:21.157434    3078 status.go:255] checking status of ha-827000-m03 ...
	I0307 14:19:21.160297    3078 status.go:330] ha-827000-m03 host status = "Running" (err=<nil>)
	I0307 14:19:21.160321    3078 host.go:66] Checking if "ha-827000-m03" exists ...
	I0307 14:19:21.160820    3078 host.go:66] Checking if "ha-827000-m03" exists ...
	I0307 14:19:21.161465    3078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:19:21.161500    3078 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m03/id_rsa Username:docker}
	W0307 14:20:36.162268    3078 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 14:20:36.162313    3078 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 14:20:36.162320    3078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 14:20:36.162324    3078 status.go:257] ha-827000-m03 status: &{Name:ha-827000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:20:36.162332    3078 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 14:20:36.162338    3078 status.go:255] checking status of ha-827000-m04 ...
	I0307 14:20:36.163093    3078 status.go:330] ha-827000-m04 host status = "Running" (err=<nil>)
	I0307 14:20:36.163101    3078 host.go:66] Checking if "ha-827000-m04" exists ...
	I0307 14:20:36.163216    3078 host.go:66] Checking if "ha-827000-m04" exists ...
	I0307 14:20:36.163326    3078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 14:20:36.163333    3078 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m04/id_rsa Username:docker}
	W0307 14:21:51.164187    3078 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 14:21:51.164247    3078 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 14:21:51.164256    3078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 14:21:51.164259    3078 status.go:257] ha-827000-m04 status: &{Name:ha-827000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 14:21:51.164269    3078 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000
E0307 14:22:13.838776    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000: exit status 3 (1m15.048008584s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 14:23:06.206785    3124 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:23:06.206835    3124 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-827000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (305.26s)
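
Both restart attempts above die before qemu boots: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so the driver start fails with Connection refused. A quick preflight sketch for that condition, using the socket path from the qemu command line in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// checkVMNetSocket dials the unix socket that socket_vmnet_client hands to
	// qemu. "connection refused" (or "no such file or directory") here means
	// the socket_vmnet daemon is not running, which is the state both VM
	// restart attempts above hit.
	func checkVMNetSocket(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := checkVMNetSocket("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet unavailable:", err)
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}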

TestMutliControlPlane/serial/RestartClusterKeepsNodes (341.44s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-827000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-827000 -v=7 --alsologtostderr
E0307 14:27:13.935412    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 14:28:17.445879    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p ha-827000 -v=7 --alsologtostderr: signal: killed (4m26.322872083s)

-- stdout --
	* Stopping node "ha-827000-m04"  ...
	* Stopping node "ha-827000-m03"  ...
	* Stopping node "ha-827000-m02"  ...
	* Stopping node "ha-827000"  ...

-- /stdout --
** stderr ** 
	I0307 14:25:38.462107    3184 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:25:38.462479    3184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:25:38.462485    3184 out.go:304] Setting ErrFile to fd 2...
	I0307 14:25:38.462488    3184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:25:38.462642    3184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:25:38.462863    3184 out.go:298] Setting JSON to false
	I0307 14:25:38.463545    3184 mustload.go:65] Loading cluster: ha-827000
	I0307 14:25:38.463787    3184 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:25:38.463862    3184 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/ha-827000/config.json ...
	I0307 14:25:38.464133    3184 mustload.go:65] Loading cluster: ha-827000
	I0307 14:25:38.464217    3184 config.go:182] Loaded profile config "ha-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:25:38.464235    3184 stop.go:39] StopHost: ha-827000-m04
	I0307 14:25:38.468413    3184 out.go:177] * Stopping node "ha-827000-m04"  ...
	I0307 14:25:38.476341    3184 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0307 14:25:38.476372    3184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0307 14:25:38.476380    3184 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m04/id_rsa Username:docker}
	W0307 14:26:53.478455    3184 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 14:26:53.478542    3184 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 14:26:53.478586    3184 main.go:141] libmachine: Stopping "ha-827000-m04"...
	I0307 14:26:56.482489    3184 main.go:141] libmachine: Machine "ha-827000-m04" was stopped.
	I0307 14:26:56.482506    3184 stop.go:75] duration metric: took 1m18.006018125s to stop
	I0307 14:26:56.482522    3184 stop.go:39] StopHost: ha-827000-m03
	I0307 14:26:56.489851    3184 out.go:177] * Stopping node "ha-827000-m03"  ...
	I0307 14:26:56.493804    3184 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0307 14:26:56.493835    3184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0307 14:26:56.493842    3184 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m03/id_rsa Username:docker}
	W0307 14:28:11.496244    3184 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 14:28:11.496303    3184 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 14:28:11.496321    3184 main.go:141] libmachine: Stopping "ha-827000-m03"...
	I0307 14:28:23.507406    3184 main.go:141] libmachine: Machine "ha-827000-m03" was stopped.
	I0307 14:28:23.507418    3184 stop.go:75] duration metric: took 1m27.013446458s to stop
	I0307 14:28:23.507427    3184 stop.go:39] StopHost: ha-827000-m02
	I0307 14:28:23.510612    3184 out.go:177] * Stopping node "ha-827000-m02"  ...
	I0307 14:28:23.518672    3184 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0307 14:28:23.518716    3184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0307 14:28:23.518725    3184 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000-m02/id_rsa Username:docker}
	W0307 14:29:38.520891    3184 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.6:22: connect: operation timed out
	W0307 14:29:38.521155    3184 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.6:22: connect: operation timed out
	I0307 14:29:38.521291    3184 main.go:141] libmachine: Stopping "ha-827000-m02"...
	I0307 14:29:38.521470    3184 stop.go:66] stop err: Machine "ha-827000-m02" is already stopped.
	I0307 14:29:38.521501    3184 stop.go:69] host is already stopped
	I0307 14:29:38.521523    3184 stop.go:39] StopHost: ha-827000
	I0307 14:29:38.526808    3184 out.go:177] * Stopping node "ha-827000"  ...
	I0307 14:29:38.535672    3184 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0307 14:29:38.536083    3184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0307 14:29:38.536117    3184 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/ha-827000/id_rsa Username:docker}

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-darwin-arm64 node list -p ha-827000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-827000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-827000 --wait=true -v=7 --alsologtostderr: context deadline exceeded (2.5µs)
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-827000 -v=7 --alsologtostderr" : context deadline exceeded
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-827000
ha_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-827000: context deadline exceeded (292ns)
ha_test.go:474: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-827000" : context deadline exceeded
ha_test.go:479: reported node list is not the same after restart. Before restart: ha-827000	192.168.105.5
ha-827000-m02	192.168.105.6
ha-827000-m03	192.168.105.7
ha-827000-m04	192.168.105.8

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000
E0307 14:30:17.006968    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-827000 -n ha-827000: exit status 3 (1m15.080819208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 14:31:19.829529    3243 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 14:31:19.829572    3243 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-827000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (341.44s)
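
The 2.5µs failure above is the test's shared deadline, not the binary: the serial steps run under one context, and by the time `minikube start` is invoked that deadline has already expired, so exec returns before anything is launched. A small self-contained illustration (not harness code):

	// With an already-expired context, exec.CommandContext fails in
	// microseconds: Start returns the context's error without ever
	// launching the binary.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		time.Sleep(time.Millisecond) // the deadline is long gone by now

		start := time.Now()
		err := exec.CommandContext(ctx, "minikube", "start").Run()
		// Prints something like: context deadline exceeded (elapsed 2.5µs)
		fmt.Printf("%v (elapsed %s)\n", err, time.Since(start))
	}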

TestImageBuild/serial/Setup (9.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-134000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-134000 --driver=qemu2 : exit status 80 (9.838120958s)

-- stdout --
	* [image-134000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-134000" primary control-plane node in "image-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-134000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-134000 -n image-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-134000 -n image-134000: exit status 7 (69.290125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)
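
Every VM creation in this report dies on the same root cause: nothing is listening on the /var/run/socket_vmnet unix socket, so the qemu network helper gets "Connection refused". A one-call probe that reproduces the check (illustrative, not part of the suite):

	// Dial the unix socket every qemu start above needs. "connection
	// refused" here means no socket_vmnet daemon is listening at that path.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // mirrors the failure above
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}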

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-328000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-328000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.764632875s)

-- stdout --
	{"specversion":"1.0","id":"e8180699-b183-4505-8cf5-1c521a60e7f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-328000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f6fc39f-a07e-4dfe-bfa9-7cb5f06be936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18320"}}
	{"specversion":"1.0","id":"874dcd84-b62a-4907-bb8e-4b23197cc716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig"}}
	{"specversion":"1.0","id":"17c1acd4-7a53-4a2a-bf24-0a84b483ca8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0904a0c8-38cb-4bc4-823a-1aca61eb8d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"213c2868-29dc-43fb-8a52-c968adfca14d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube"}}
	{"specversion":"1.0","id":"aa08d069-1261-419e-969f-6d848f7cf415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4297ac6e-b5a7-4f25-abd9-acaa4eab06ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6651696-dc0b-47c8-9997-a87d80c700aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"42998af1-329f-469d-8dea-11c8e2976a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-328000\" primary control-plane node in \"json-output-328000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9be05bd-636d-4323-907d-fda91f3764b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8cc20751-4b58-44cb-acf6-cb3163d9f160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-328000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cb1ac9d-624b-476c-868d-4473abfbb5cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"10a8affe-8d8d-484c-ac34-b37d6f699fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"195beb62-a9c7-4ca6-8392-6b9d07306e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-328000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c9c3eccd-00c0-446c-8469-d5eb59bb76cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"ca2e83a2-c904-4cb6-91af-1566aa8077e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-328000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
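
The JSON output test reads stdout line by line and unmarshals each line as a cloud event; the bare `OUTPUT: ` / `ERROR: ...` lines injected by the failed VM start are not JSON, hence `invalid character 'O' looking for beginning of value`. A sketch of that parse step (the cloudEvent fields are inferred from the events printed above, not copied from json_output_test.go):

	// Parse stdout lines as cloud events and fail on the first non-JSON
	// line, just as the test does above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		lines := []string{
			`{"specversion":"1.0","id":"42","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating VM"}}`,
			`OUTPUT: `,
			`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
		}
		for _, ln := range lines {
			var ev cloudEvent
			if err := json.Unmarshal([]byte(ln), &ev); err != nil {
				// Fails here: invalid character 'O' looking for beginning of value
				fmt.Println("not a cloud event:", err)
				return
			}
			fmt.Println("event:", ev.Type, ev.Data["message"])
		}
	}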

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-328000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-328000 --output=json --user=testUser: exit status 83 (80.504625ms)

-- stdout --
	{"specversion":"1.0","id":"687b268a-0936-4efb-aaa5-021bbe285267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-328000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"2b2766f6-c7b5-4c6b-9e8a-0156cc451132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-328000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-328000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-328000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-328000 --output=json --user=testUser: exit status 83 (44.273084ms)

-- stdout --
	* The control-plane node json-output-328000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-328000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-328000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-328000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-106000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-106000 --driver=qemu2 : exit status 80 (9.827080292s)

-- stdout --
	* [first-106000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-106000" primary control-plane node in "first-106000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-106000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-106000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-106000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 14:33:14.523843 -0800 PST m=+2807.019823001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-108000 -n second-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-108000 -n second-108000: exit status 85 (85.666875ms)

-- stdout --
	* Profile "second-108000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-108000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-108000" host is not running, skipping log retrieval (state="* Profile \"second-108000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-108000\"")
helpers_test.go:175: Cleaning up "second-108000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-108000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 14:33:14.842018 -0800 PST m=+2807.337997418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-106000 -n first-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-106000 -n first-106000: exit status 7 (32.044666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-106000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-106000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-106000
--- FAIL: TestMinikubeProfile (10.28s)
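
The post-mortem's `status --format={{.Host}}` is a Go text/template rendered against a status value like the `&{Name:ha-827000-m04 Host:Error Kubelet:Nonexistent ...}` line printed earlier in this report. A minimal rendering sketch; the struct is illustrative, with field names taken from that log line rather than from minikube's source:

	// Render a {{.Host}}-style template against an assumed status struct.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		st := Status{Name: "ha-827000", Host: "Error",
			Kubelet: "Nonexistent", APIServer: "Irrelevant", Kubeconfig: "Irrelevant"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Error", as in the stdout blocks above
	}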

TestMountStart/serial/StartWithMountFirst (10.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0307 14:33:17.447626    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.49571275s)

-- stdout --
	* [mount-start-1-043000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-043000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-043000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-043000 -n mount-start-1-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-043000 -n mount-start-1-043000: exit status 7 (68.200625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-043000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.56s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-172000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-172000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.808816209s)

-- stdout --
	* [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:33:25.898400    3434 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:33:25.898570    3434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:33:25.898576    3434 out.go:304] Setting ErrFile to fd 2...
	I0307 14:33:25.898578    3434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:33:25.898875    3434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:33:25.900179    3434 out.go:298] Setting JSON to false
	I0307 14:33:25.916598    3434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3762,"bootTime":1709847043,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:33:25.916664    3434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:33:25.922372    3434 out.go:177] * [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:33:25.929318    3434 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:33:25.933328    3434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:33:25.929353    3434 notify.go:220] Checking for updates...
	I0307 14:33:25.939244    3434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:33:25.942292    3434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:33:25.945213    3434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:33:25.948256    3434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:33:25.951440    3434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:33:25.954238    3434 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:33:25.961263    3434 start.go:297] selected driver: qemu2
	I0307 14:33:25.961268    3434 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:33:25.961273    3434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:33:25.963472    3434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:33:25.964985    3434 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:33:25.968354    3434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:33:25.968405    3434 cni.go:84] Creating CNI manager for ""
	I0307 14:33:25.968411    3434 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 14:33:25.968415    3434 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 14:33:25.968446    3434 start.go:340] cluster config:
	{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:33:25.972856    3434 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:33:25.980291    3434 out.go:177] * Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	I0307 14:33:25.984256    3434 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:33:25.984283    3434 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:33:25.984293    3434 cache.go:56] Caching tarball of preloaded images
	I0307 14:33:25.984348    3434 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:33:25.984354    3434 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:33:25.984579    3434 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/multinode-172000/config.json ...
	I0307 14:33:25.984591    3434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/multinode-172000/config.json: {Name:mk8b7d9a7660e6e07e56e363562f3bf4e25a4ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:33:25.984819    3434 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:33:25.984853    3434 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "multinode-172000"
	I0307 14:33:25.984864    3434 start.go:93] Provisioning new machine with config: &{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:33:25.984896    3434 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:33:25.993272    3434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:33:26.011775    3434 start.go:159] libmachine.API.Create for "multinode-172000" (driver="qemu2")
	I0307 14:33:26.011808    3434 client.go:168] LocalClient.Create starting
	I0307 14:33:26.011886    3434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:33:26.011914    3434 main.go:141] libmachine: Decoding PEM data...
	I0307 14:33:26.011925    3434 main.go:141] libmachine: Parsing certificate...
	I0307 14:33:26.011970    3434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:33:26.011992    3434 main.go:141] libmachine: Decoding PEM data...
	I0307 14:33:26.011998    3434 main.go:141] libmachine: Parsing certificate...
	I0307 14:33:26.012367    3434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:33:26.149548    3434 main.go:141] libmachine: Creating SSH key...
	I0307 14:33:26.269630    3434 main.go:141] libmachine: Creating Disk image...
	I0307 14:33:26.269639    3434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:33:26.269832    3434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:26.282493    3434 main.go:141] libmachine: STDOUT: 
	I0307 14:33:26.282514    3434 main.go:141] libmachine: STDERR: 
	I0307 14:33:26.282576    3434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2 +20000M
	I0307 14:33:26.293331    3434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:33:26.293351    3434 main.go:141] libmachine: STDERR: 
	I0307 14:33:26.293364    3434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:26.293371    3434 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:33:26.293407    3434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:8a:6e:f9:0e:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:26.295115    3434 main.go:141] libmachine: STDOUT: 
	I0307 14:33:26.295130    3434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:33:26.295148    3434 client.go:171] duration metric: took 283.333ms to LocalClient.Create
	I0307 14:33:28.297346    3434 start.go:128] duration metric: took 2.312425333s to createHost
	I0307 14:33:28.297439    3434 start.go:83] releasing machines lock for "multinode-172000", held for 2.312552125s
	W0307 14:33:28.297509    3434 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:33:28.305620    3434 out.go:177] * Deleting "multinode-172000" in qemu2 ...
	W0307 14:33:28.331262    3434 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:33:28.331303    3434 start.go:728] Will try again in 5 seconds ...
	I0307 14:33:33.333477    3434 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:33:33.333906    3434 start.go:364] duration metric: took 328.25µs to acquireMachinesLock for "multinode-172000"
	I0307 14:33:33.334025    3434 start.go:93] Provisioning new machine with config: &{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:33:33.334316    3434 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:33:33.343977    3434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:33:33.393784    3434 start.go:159] libmachine.API.Create for "multinode-172000" (driver="qemu2")
	I0307 14:33:33.393831    3434 client.go:168] LocalClient.Create starting
	I0307 14:33:33.393942    3434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:33:33.394009    3434 main.go:141] libmachine: Decoding PEM data...
	I0307 14:33:33.394027    3434 main.go:141] libmachine: Parsing certificate...
	I0307 14:33:33.394089    3434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:33:33.394130    3434 main.go:141] libmachine: Decoding PEM data...
	I0307 14:33:33.394140    3434 main.go:141] libmachine: Parsing certificate...
	I0307 14:33:33.394635    3434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:33:33.543650    3434 main.go:141] libmachine: Creating SSH key...
	I0307 14:33:33.610779    3434 main.go:141] libmachine: Creating Disk image...
	I0307 14:33:33.610784    3434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:33:33.610958    3434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:33.623071    3434 main.go:141] libmachine: STDOUT: 
	I0307 14:33:33.623090    3434 main.go:141] libmachine: STDERR: 
	I0307 14:33:33.623155    3434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2 +20000M
	I0307 14:33:33.633989    3434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:33:33.634005    3434 main.go:141] libmachine: STDERR: 
	I0307 14:33:33.634016    3434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:33.634021    3434 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:33:33.634066    3434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c0:e3:0f:13:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:33:33.635830    3434 main.go:141] libmachine: STDOUT: 
	I0307 14:33:33.635848    3434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:33:33.635862    3434 client.go:171] duration metric: took 242.026708ms to LocalClient.Create
	I0307 14:33:35.638138    3434 start.go:128] duration metric: took 2.3037555s to createHost
	I0307 14:33:35.638225    3434 start.go:83] releasing machines lock for "multinode-172000", held for 2.304290625s
	W0307 14:33:35.638674    3434 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:33:35.646334    3434 out.go:177] 
	W0307 14:33:35.652301    3434 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:33:35.652349    3434 out.go:239] * 
	* 
	W0307 14:33:35.654465    3434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:33:35.663153    3434 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-172000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
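The "Connection refused" on /var/run/socket_vmnet is the common root cause behind the qemu2 start failures in this run: socket_vmnet_client cannot obtain a network file descriptor for QEMU when no socket_vmnet daemon is listening. A minimal Go sketch of a pre-flight probe for that socket (a hypothetical diagnostic, not part of the test suite; the socket path is taken from the failing command line above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path as passed to socket_vmnet_client in the failing invocation.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means no daemon is listening on the socket;
		// a missing socket file would surface as "no such file or directory".
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections; the failure lies elsewhere")
}
```

A probe like this, run before the suite, would separate host-environment breakage (the daemon not running on the Jenkins worker) from genuine minikube regressions.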
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (68.344333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)

TestMultiNode/serial/DeployApp2Nodes (97.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.670375ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-172000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- rollout status deployment/busybox: exit status 1 (58.96ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.225416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.048458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.357041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.622125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.506584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.197458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.784375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.548417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.881084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.14175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
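All ten attempts above fail identically before the retry budget at multinode_test.go:524 is exhausted, since the cluster was never created. The polling pattern visible in the log is roughly the sketch below (fetchPodIPs and the timing budget are illustrative stand-ins, not the test's actual helpers):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// fetchPodIPs runs the same query as multinode_test.go:505 and returns the
// space-separated pod IPs, or an error such as the `no server found` above.
func fetchPodIPs(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	ips := strings.Fields(string(out))
	if len(ips) == 0 {
		return nil, fmt.Errorf("no pod IPs yet")
	}
	return ips, nil
}

func main() {
	deadline := time.Now().Add(90 * time.Second)
	for time.Now().Before(deadline) {
		if ips, err := fetchPodIPs("multinode-172000"); err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(10 * time.Second) // "may be temporary", so back off and retry
	}
	fmt.Println("failed to resolve pod IPs within the budget")
}
```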
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.1765ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.990125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.185166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.131209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.543833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.30s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-172000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.339458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (32.202416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-172000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-172000 -v 3 --alsologtostderr: exit status 83 (44.364625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-172000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-172000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:13.171150    3521 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:13.171510    3521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.171513    3521 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:13.171515    3521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.171671    3521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:13.171915    3521 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:13.172101    3521 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:13.177534    3521 out.go:177] * The control-plane node multinode-172000 host is not running: state=Stopped
	I0307 14:35:13.180564    3521 out.go:177]   To start a cluster, run: "minikube start -p multinode-172000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-172000 -v 3 --alsologtostderr" : exit status 83
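Exit status 83 is what minikube commands return in this run when they need a running control plane but find the host stopped (compare the "host is not running: state=Stopped" message in the stdout above). The post-mortem helper below infers the same state from `status --format={{.Host}}`. A small sketch of that guard, with the command line taken from this log and the exit-code handling illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem helper runs after each failed subtest.
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-172000").Output()
	host := strings.TrimSpace(string(out)) // "Stopped" in this run
	if err != nil || host != "Running" {
		// Here err is `exit status 7`, which helpers_test.go treats as "may be ok".
		fmt.Printf("host is %q (err=%v); start the cluster before `node add`\n", host, err)
		return
	}
	fmt.Println("control plane running; `minikube node add` can proceed")
}
```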
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.83675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-172000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-172000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.438084ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-172000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-172000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-172000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
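The "unexpected end of JSON input" at multinode_test.go:230 follows directly from the kubectl failure above it: with the context missing, kubectl writes only to stderr, so the test hands an empty byte slice to the JSON decoder. A minimal standard-library reproduction:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	// kubectl produced no stdout, so this is exactly what the test decoded.
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}
```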
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.943125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-172000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-172000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-172000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-172000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.261125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status --output json --alsologtostderr: exit status 7 (31.6245ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-172000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:13.412592    3534 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:13.412778    3534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.412781    3534 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:13.412783    3534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.412907    3534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:13.413033    3534 out.go:298] Setting JSON to true
	I0307 14:35:13.413046    3534 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:13.413110    3534 notify.go:220] Checking for updates...
	I0307 14:35:13.413243    3534 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:13.413249    3534 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:13.413474    3534 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:13.413477    3534 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:13.413480    3534 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-172000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
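The decode failure at multinode_test.go:191 is a shape mismatch: for this single-node profile, `status --output json` prints one bare object (the stdout above), while the test unmarshals into []cmd.Status. A tolerant decoder would accept both shapes; Status below is a reduced, illustrative stand-in for minikube's cmd.Status:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Status is a cut-down stand-in for the fields shown in the stdout above.
type Status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// decodeStatuses accepts either a JSON array (multi-node) or a bare object
// (single node), which is the case this run tripped over.
func decodeStatuses(raw []byte) ([]Status, error) {
	raw = bytes.TrimSpace(raw)
	if len(raw) > 0 && raw[0] == '[' {
		var many []Status
		err := json.Unmarshal(raw, &many)
		return many, err
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// The exact stdout captured above for the stopped single-node profile.
	raw := []byte(`{"Name":"multinode-172000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(sts, err)
}
```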
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.234583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 node stop m03: exit status 85 (46.698584ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-172000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status: exit status 7 (31.585458ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr: exit status 7 (31.685041ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:13.554714    3542 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:13.554877    3542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.554880    3542 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:13.554882    3542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.555019    3542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:13.555137    3542 out.go:298] Setting JSON to false
	I0307 14:35:13.555155    3542 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:13.555204    3542 notify.go:220] Checking for updates...
	I0307 14:35:13.555368    3542 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:13.555376    3542 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:13.555589    3542 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:13.555592    3542 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:13.555595    3542 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr": multinode-172000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.585708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (49.25s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.754208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:13.618330    3546 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:13.618535    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.618538    3546 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:13.618541    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.618666    3546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:13.618931    3546 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:13.619120    3546 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:13.622291    3546 out.go:177] 
	W0307 14:35:13.625201    3546 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0307 14:35:13.625213    3546 out.go:239] * 
	W0307 14:35:13.626802    3546 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:35:13.630224    3546 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0307 14:35:13.618330    3546 out.go:291] Setting OutFile to fd 1 ...
I0307 14:35:13.618535    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 14:35:13.618538    3546 out.go:304] Setting ErrFile to fd 2...
I0307 14:35:13.618541    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 14:35:13.618666    3546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 14:35:13.618931    3546 mustload.go:65] Loading cluster: multinode-172000
I0307 14:35:13.619120    3546 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 14:35:13.622291    3546 out.go:177] 
W0307 14:35:13.625201    3546 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0307 14:35:13.625213    3546 out.go:239] * 
W0307 14:35:13.626802    3546 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 14:35:13.630224    3546 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-172000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (31.400416ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:13.663928    3548 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:13.664089    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.664092    3548 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:13.664094    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:13.664234    3548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:13.664358    3548 out.go:298] Setting JSON to false
	I0307 14:35:13.664370    3548 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:13.664427    3548 notify.go:220] Checking for updates...
	I0307 14:35:13.664569    3548 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:13.664576    3548 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:13.664773    3548 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:13.664776    3548 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:13.664778    3548 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (77.971666ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:14.958976    3550 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:14.959150    3550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:14.959155    3550 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:14.959158    3550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:14.959345    3550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:14.959517    3550 out.go:298] Setting JSON to false
	I0307 14:35:14.959534    3550 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:14.959576    3550 notify.go:220] Checking for updates...
	I0307 14:35:14.959806    3550 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:14.959814    3550 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:14.960084    3550 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:14.960089    3550 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:14.960092    3550 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (73.662041ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:16.816054    3552 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:16.816285    3552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:16.816290    3552 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:16.816293    3552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:16.816454    3552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:16.816627    3552 out.go:298] Setting JSON to false
	I0307 14:35:16.816643    3552 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:16.816681    3552 notify.go:220] Checking for updates...
	I0307 14:35:16.816892    3552 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:16.816900    3552 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:16.817181    3552 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:16.817186    3552 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:16.817189    3552 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (76.503209ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:18.925601    3554 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:18.925757    3554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:18.925761    3554 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:18.925764    3554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:18.925954    3554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:18.926109    3554 out.go:298] Setting JSON to false
	I0307 14:35:18.926125    3554 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:18.926162    3554 notify.go:220] Checking for updates...
	I0307 14:35:18.926373    3554 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:18.926380    3554 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:18.926640    3554 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:18.926645    3554 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:18.926648    3554 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (78.047625ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:21.638069    3556 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:21.638246    3556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:21.638251    3556 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:21.638254    3556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:21.638419    3556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:21.638585    3556 out.go:298] Setting JSON to false
	I0307 14:35:21.638600    3556 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:21.638627    3556 notify.go:220] Checking for updates...
	I0307 14:35:21.638867    3556 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:21.638874    3556 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:21.639169    3556 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:21.639174    3556 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:21.639177    3556 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (76.687834ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:26.286376    3558 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:26.286557    3558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:26.286561    3558 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:26.286564    3558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:26.286711    3558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:26.286885    3558 out.go:298] Setting JSON to false
	I0307 14:35:26.286900    3558 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:26.286943    3558 notify.go:220] Checking for updates...
	I0307 14:35:26.287200    3558 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:26.287210    3558 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:26.287481    3558 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:26.287487    3558 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:26.287490    3558 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (75.899125ms)

                                                
                                                
-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:35:33.493125    3560 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:33.493294    3560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:33.493299    3560 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:33.493301    3560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:33.493464    3560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:33.493620    3560 out.go:298] Setting JSON to false
	I0307 14:35:33.493634    3560 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:33.493670    3560 notify.go:220] Checking for updates...
	I0307 14:35:33.493889    3560 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:33.493897    3560 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:33.494171    3560 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:33.494176    3560 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:33.494179    3560 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (74.998791ms)

-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 14:35:46.914311    3562 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:35:46.914493    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:46.914498    3562 out.go:304] Setting ErrFile to fd 2...
	I0307 14:35:46.914501    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:35:46.914662    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:35:46.914822    3562 out.go:298] Setting JSON to false
	I0307 14:35:46.914838    3562 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:35:46.914875    3562 notify.go:220] Checking for updates...
	I0307 14:35:46.915072    3562 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:35:46.915079    3562 status.go:255] checking status of multinode-172000 ...
	I0307 14:35:46.915345    3562 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:35:46.915350    3562 status.go:343] host is not running, skipping remaining checks
	I0307 14:35:46.915353    3562 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr: exit status 7 (77.081833ms)

-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 14:36:02.804343    3568 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:02.804554    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:02.804558    3568 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:02.804561    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:02.804727    3568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:02.804906    3568 out.go:298] Setting JSON to false
	I0307 14:36:02.804923    3568 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:36:02.804947    3568 notify.go:220] Checking for updates...
	I0307 14:36:02.805201    3568 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:02.805209    3568 status.go:255] checking status of multinode-172000 ...
	I0307 14:36:02.805502    3568 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:36:02.805507    3568 status.go:343] host is not running, skipping remaining checks
	I0307 14:36:02.805510    3568 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-172000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (34.964541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.25s)
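
A note on the exit code above: per minikube's own "status --help" text, the status exit code encodes component health bit-wise from right to left (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so "exit status 7" is 1+2+4 and simply mirrors the Stopped/Stopped/Stopped output; the status command itself ran fine, the cluster just never came back up. A minimal way to observe the code locally, reusing the exact binary and profile from this run (the echo is illustrative):

	out/minikube-darwin-arm64 -p multinode-172000 status; echo "status exit: $?"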

TestMultiNode/serial/RestartKeepsNodes (9.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-172000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-172000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-172000: (3.717384333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-172000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-172000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.216200042s)

-- stdout --
	* [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	* Restarting existing qemu2 VM for "multinode-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:36:06.656003    3592 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:06.656155    3592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:06.656162    3592 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:06.656166    3592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:06.656370    3592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:06.657606    3592 out.go:298] Setting JSON to false
	I0307 14:36:06.676514    3592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3923,"bootTime":1709847043,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:36:06.676579    3592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:36:06.681654    3592 out.go:177] * [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:36:06.684624    3592 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:36:06.688538    3592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:36:06.684669    3592 notify.go:220] Checking for updates...
	I0307 14:36:06.692419    3592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:36:06.695536    3592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:36:06.698537    3592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:36:06.701591    3592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:36:06.704938    3592 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:06.705009    3592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:36:06.709538    3592 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:36:06.716477    3592 start.go:297] selected driver: qemu2
	I0307 14:36:06.716482    3592 start.go:901] validating driver "qemu2" against &{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:36:06.716541    3592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:36:06.718961    3592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:36:06.719011    3592 cni.go:84] Creating CNI manager for ""
	I0307 14:36:06.719016    3592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 14:36:06.719071    3592 start.go:340] cluster config:
	{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:36:06.723644    3592 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:06.729495    3592 out.go:177] * Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	I0307 14:36:06.733485    3592 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:36:06.733500    3592 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:36:06.733508    3592 cache.go:56] Caching tarball of preloaded images
	I0307 14:36:06.733562    3592 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:36:06.733567    3592 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:36:06.733628    3592 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/multinode-172000/config.json ...
	I0307 14:36:06.734136    3592 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:06.734171    3592 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "multinode-172000"
	I0307 14:36:06.734180    3592 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:36:06.734184    3592 fix.go:54] fixHost starting: 
	I0307 14:36:06.734314    3592 fix.go:112] recreateIfNeeded on multinode-172000: state=Stopped err=<nil>
	W0307 14:36:06.734323    3592 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:36:06.741528    3592 out.go:177] * Restarting existing qemu2 VM for "multinode-172000" ...
	I0307 14:36:06.745566    3592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c0:e3:0f:13:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:36:06.747668    3592 main.go:141] libmachine: STDOUT: 
	I0307 14:36:06.747691    3592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:06.747723    3592 fix.go:56] duration metric: took 13.535666ms for fixHost
	I0307 14:36:06.747727    3592 start.go:83] releasing machines lock for "multinode-172000", held for 13.551167ms
	W0307 14:36:06.747734    3592 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:36:06.747777    3592 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:06.747783    3592 start.go:728] Will try again in 5 seconds ...
	I0307 14:36:11.749806    3592 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:11.750256    3592 start.go:364] duration metric: took 335.125µs to acquireMachinesLock for "multinode-172000"
	I0307 14:36:11.750373    3592 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:36:11.750397    3592 fix.go:54] fixHost starting: 
	I0307 14:36:11.751078    3592 fix.go:112] recreateIfNeeded on multinode-172000: state=Stopped err=<nil>
	W0307 14:36:11.751104    3592 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:36:11.756449    3592 out.go:177] * Restarting existing qemu2 VM for "multinode-172000" ...
	I0307 14:36:11.761570    3592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c0:e3:0f:13:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:36:11.771514    3592 main.go:141] libmachine: STDOUT: 
	I0307 14:36:11.771641    3592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:11.771733    3592 fix.go:56] duration metric: took 21.334125ms for fixHost
	I0307 14:36:11.771750    3592 start.go:83] releasing machines lock for "multinode-172000", held for 21.47125ms
	W0307 14:36:11.772126    3592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:11.778175    3592 out.go:177] 
	W0307 14:36:11.782448    3592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:36:11.782474    3592 out.go:239] * 
	* 
	W0307 14:36:11.784979    3592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:36:11.793415    3592 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-172000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-172000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (33.695708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.07s)
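
Both restart attempts above die on the same line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. With the qemu2 driver, minikube hands the VM's network over the socket_vmnet unix socket (see the socket_vmnet_client invocation in the stderr log), so a refused connection points at the socket_vmnet daemon not running, or its socket being stale, on the CI host rather than at the test itself. A rough diagnostic sketch for the runner, using only paths that appear in the log (the launchd query is an assumption about how the daemon might be installed):

	ls -l /var/run/socket_vmnet                  # does the unix socket minikube dials even exist?
	pgrep -fl socket_vmnet                       # is a socket_vmnet process alive to accept the connection?
	sudo launchctl list | grep -i socket_vmnet   # assumption: only meaningful if socket_vmnet runs as a launchd service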

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 node delete m03: exit status 83 (41.550708ms)

-- stdout --
	* The control-plane node multinode-172000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-172000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-172000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr: exit status 7 (31.918667ms)

-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 14:36:11.983960    3606 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:11.984109    3606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:11.984113    3606 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:11.984115    3606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:11.984251    3606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:11.984377    3606 out.go:298] Setting JSON to false
	I0307 14:36:11.984388    3606 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:36:11.984439    3606 notify.go:220] Checking for updates...
	I0307 14:36:11.984589    3606 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:11.984595    3606 status.go:255] checking status of multinode-172000 ...
	I0307 14:36:11.984797    3606 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:36:11.984801    3606 status.go:343] host is not running, skipping remaining checks
	I0307 14:36:11.984803    3606 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.673292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (1.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-172000 stop: (1.858111875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status: exit status 7 (65.839583ms)

-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr: exit status 7 (33.665708ms)

-- stdout --
	multinode-172000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 14:36:13.973897    3626 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:13.974047    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:13.974050    3626 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:13.974053    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:13.974178    3626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:13.974303    3626 out.go:298] Setting JSON to false
	I0307 14:36:13.974314    3626 mustload.go:65] Loading cluster: multinode-172000
	I0307 14:36:13.974380    3626 notify.go:220] Checking for updates...
	I0307 14:36:13.974499    3626 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:13.974504    3626 status.go:255] checking status of multinode-172000 ...
	I0307 14:36:13.974738    3626 status.go:330] multinode-172000 host status = "Stopped" (err=<nil>)
	I0307 14:36:13.974743    3626 status.go:343] host is not running, skipping remaining checks
	I0307 14:36:13.974746    3626 status.go:257] multinode-172000 status: &{Name:multinode-172000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr": multinode-172000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr": multinode-172000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (31.371125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.99s)
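
A plausible reading of the two "incorrect number of ... stopped" assertions above (multinode_test.go:364 and :368): the test counts the "host: Stopped" and "kubelet: Stopped" entries in the status output, presumably expecting one per node of a two-node cluster, but only the primary multinode-172000 is listed because no worker node ever joined earlier in this serial run. A quick hedged check against the same command (the grep count is illustrative):

	out/minikube-darwin-arm64 -p multinode-172000 status --alsologtostderr | grep -c "host: Stopped"   # prints 1 here; a two-node run would presumably print 2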

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-172000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-172000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18542s)

-- stdout --
	* [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	* Restarting existing qemu2 VM for "multinode-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:36:14.036778    3630 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:14.036886    3630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:14.036889    3630 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:14.036892    3630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:14.037010    3630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:14.038003    3630 out.go:298] Setting JSON to false
	I0307 14:36:14.054083    3630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3931,"bootTime":1709847043,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:36:14.054142    3630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:36:14.058049    3630 out.go:177] * [multinode-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:36:14.065817    3630 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:36:14.065876    3630 notify.go:220] Checking for updates...
	I0307 14:36:14.069817    3630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:36:14.072777    3630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:36:14.075809    3630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:36:14.078881    3630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:36:14.081743    3630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:36:14.085011    3630 config.go:182] Loaded profile config "multinode-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:14.085267    3630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:36:14.089781    3630 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:36:14.096790    3630 start.go:297] selected driver: qemu2
	I0307 14:36:14.096796    3630 start.go:901] validating driver "qemu2" against &{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:36:14.096853    3630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:36:14.099104    3630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:36:14.099152    3630 cni.go:84] Creating CNI manager for ""
	I0307 14:36:14.099158    3630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 14:36:14.099203    3630 start.go:340] cluster config:
	{Name:multinode-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:36:14.103605    3630 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:14.110815    3630 out.go:177] * Starting "multinode-172000" primary control-plane node in "multinode-172000" cluster
	I0307 14:36:14.114807    3630 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:36:14.114823    3630 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:36:14.114841    3630 cache.go:56] Caching tarball of preloaded images
	I0307 14:36:14.114903    3630 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:36:14.114910    3630 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:36:14.114973    3630 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/multinode-172000/config.json ...
	I0307 14:36:14.115448    3630 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:14.115474    3630 start.go:364] duration metric: took 19.708µs to acquireMachinesLock for "multinode-172000"
	I0307 14:36:14.115481    3630 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:36:14.115488    3630 fix.go:54] fixHost starting: 
	I0307 14:36:14.115609    3630 fix.go:112] recreateIfNeeded on multinode-172000: state=Stopped err=<nil>
	W0307 14:36:14.115618    3630 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:36:14.123793    3630 out.go:177] * Restarting existing qemu2 VM for "multinode-172000" ...
	I0307 14:36:14.127825    3630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c0:e3:0f:13:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:36:14.129853    3630 main.go:141] libmachine: STDOUT: 
	I0307 14:36:14.129869    3630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:14.129898    3630 fix.go:56] duration metric: took 14.408833ms for fixHost
	I0307 14:36:14.129904    3630 start.go:83] releasing machines lock for "multinode-172000", held for 14.42625ms
	W0307 14:36:14.129909    3630 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:36:14.129948    3630 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:14.129953    3630 start.go:728] Will try again in 5 seconds ...
	I0307 14:36:19.132096    3630 start.go:360] acquireMachinesLock for multinode-172000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:19.132380    3630 start.go:364] duration metric: took 221.917µs to acquireMachinesLock for "multinode-172000"
	I0307 14:36:19.132504    3630 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:36:19.132528    3630 fix.go:54] fixHost starting: 
	I0307 14:36:19.133157    3630 fix.go:112] recreateIfNeeded on multinode-172000: state=Stopped err=<nil>
	W0307 14:36:19.133181    3630 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:36:19.138657    3630 out.go:177] * Restarting existing qemu2 VM for "multinode-172000" ...
	I0307 14:36:19.146716    3630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c0:e3:0f:13:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/multinode-172000/disk.qcow2
	I0307 14:36:19.156029    3630 main.go:141] libmachine: STDOUT: 
	I0307 14:36:19.156097    3630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:19.156171    3630 fix.go:56] duration metric: took 23.649334ms for fixHost
	I0307 14:36:19.156192    3630 start.go:83] releasing machines lock for "multinode-172000", held for 23.787584ms
	W0307 14:36:19.156402    3630 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:19.163491    3630 out.go:177] 
	W0307 14:36:19.167615    3630 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:36:19.167644    3630 out.go:239] * 
	* 
	W0307 14:36:19.170019    3630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:36:19.178641    3630 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-172000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (69.729375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (20.03s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-172000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-172000-m01 --driver=qemu2 
E0307 14:36:20.517273    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-172000-m01 --driver=qemu2 : exit status 80 (9.869604584s)

-- stdout --
	* [multinode-172000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-172000-m01" primary control-plane node in "multinode-172000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-172000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-172000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-172000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-172000-m02 --driver=qemu2 : exit status 80 (9.900196084s)

-- stdout --
	* [multinode-172000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-172000-m02" primary control-plane node in "multinode-172000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-172000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-172000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-172000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-172000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-172000: exit status 83 (82.545042ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-172000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-172000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-172000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-172000 -n multinode-172000: exit status 7 (32.267875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.03s)
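
Every qemu2 start failure in this run reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so the socket_vmnet_client invocation recorded in the libmachine logs is refused before QEMU ever boots. The socket can be probed directly, independently of minikube; the following is a minimal standalone sketch (not part of the test suite) that assumes only the default socket path shown in these logs.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path handed to socket_vmnet_client by the qemu2 driver, per the logs.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this agent the probe is expected to fail with "connection
			// refused": the socket_vmnet daemon is not running (or its socket
			// file is stale) and must be restarted before qemu2 VMs can start.
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}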

                                                
                                    
TestPreload (9.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.783013s)

                                                
                                                
-- stdout --
	* [test-preload-690000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-690000" primary control-plane node in "test-preload-690000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:36:39.454484    3690 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:36:39.454611    3690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:39.454614    3690 out.go:304] Setting ErrFile to fd 2...
	I0307 14:36:39.454616    3690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:36:39.454739    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:36:39.455783    3690 out.go:298] Setting JSON to false
	I0307 14:36:39.472147    3690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3956,"bootTime":1709847043,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:36:39.472205    3690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:36:39.478498    3690 out.go:177] * [test-preload-690000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:36:39.485507    3690 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:36:39.485561    3690 notify.go:220] Checking for updates...
	I0307 14:36:39.490466    3690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:36:39.493463    3690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:36:39.496387    3690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:36:39.499489    3690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:36:39.502450    3690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:36:39.505784    3690 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:36:39.505831    3690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:36:39.510419    3690 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:36:39.517419    3690 start.go:297] selected driver: qemu2
	I0307 14:36:39.517426    3690 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:36:39.517435    3690 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:36:39.519766    3690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:36:39.522400    3690 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:36:39.525526    3690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:36:39.525565    3690 cni.go:84] Creating CNI manager for ""
	I0307 14:36:39.525573    3690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:36:39.525577    3690 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:36:39.525605    3690 start.go:340] cluster config:
	{Name:test-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:36:39.529974    3690 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.537429    3690 out.go:177] * Starting "test-preload-690000" primary control-plane node in "test-preload-690000" cluster
	I0307 14:36:39.541264    3690 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0307 14:36:39.541347    3690 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/test-preload-690000/config.json ...
	I0307 14:36:39.541363    3690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/test-preload-690000/config.json: {Name:mk643da7cc79d2d0c3a0bdddea8d2f9b53a13058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:36:39.541377    3690 cache.go:107] acquiring lock: {Name:mkfdec6903d3d5ee53708053d4a89336c4a1f76b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541387    3690 cache.go:107] acquiring lock: {Name:mk9de3588fc52383cd9dec53101c8bae12443442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541394    3690 cache.go:107] acquiring lock: {Name:mk23aa3211ea46323b59efe42ffb016d7b817892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541560    3690 cache.go:107] acquiring lock: {Name:mke099ba51ee9ea750b40f58dea62dbdd88ac0be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541591    3690 cache.go:107] acquiring lock: {Name:mkfbe12a419865d27a1b0ed822c841958095ae29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541662    3690 cache.go:107] acquiring lock: {Name:mk9f401d00b87ef8384533585097edb0c80b7988 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541671    3690 cache.go:107] acquiring lock: {Name:mkca25c43c3dbad1b3701e571d0b02521766205a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541707    3690 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 14:36:39.541707    3690 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 14:36:39.541688    3690 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 14:36:39.541699    3690 cache.go:107] acquiring lock: {Name:mkae42e63e261422e6d14c8bd0df59eefe7ca288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:36:39.541789    3690 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:36:39.541829    3690 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:36:39.541831    3690 start.go:360] acquireMachinesLock for test-preload-690000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:39.541892    3690 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "test-preload-690000"
	I0307 14:36:39.541835    3690 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 14:36:39.541950    3690 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:36:39.541905    3690 start.go:93] Provisioning new machine with config: &{Name:test-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:36:39.541964    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:36:39.541992    3690 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 14:36:39.549451    3690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:36:39.555008    3690 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:36:39.555442    3690 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:36:39.555673    3690 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 14:36:39.555983    3690 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 14:36:39.559747    3690 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 14:36:39.559765    3690 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 14:36:39.559843    3690 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:36:39.559879    3690 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 14:36:39.567860    3690 start.go:159] libmachine.API.Create for "test-preload-690000" (driver="qemu2")
	I0307 14:36:39.567884    3690 client.go:168] LocalClient.Create starting
	I0307 14:36:39.567964    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:36:39.567995    3690 main.go:141] libmachine: Decoding PEM data...
	I0307 14:36:39.568005    3690 main.go:141] libmachine: Parsing certificate...
	I0307 14:36:39.568047    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:36:39.568069    3690 main.go:141] libmachine: Decoding PEM data...
	I0307 14:36:39.568075    3690 main.go:141] libmachine: Parsing certificate...
	I0307 14:36:39.568429    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:36:39.703778    3690 main.go:141] libmachine: Creating SSH key...
	I0307 14:36:39.815196    3690 main.go:141] libmachine: Creating Disk image...
	I0307 14:36:39.815220    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:36:39.815434    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:39.829161    3690 main.go:141] libmachine: STDOUT: 
	I0307 14:36:39.829187    3690 main.go:141] libmachine: STDERR: 
	I0307 14:36:39.829232    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2 +20000M
	I0307 14:36:39.841229    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:36:39.841251    3690 main.go:141] libmachine: STDERR: 
	I0307 14:36:39.841264    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:39.841267    3690 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:36:39.841303    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:97:7c:35:f3:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:39.843440    3690 main.go:141] libmachine: STDOUT: 
	I0307 14:36:39.843469    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:39.843490    3690 client.go:171] duration metric: took 275.599958ms to LocalClient.Create
	I0307 14:36:41.704144    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0307 14:36:41.755332    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0307 14:36:41.796462    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 14:36:41.803069    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 14:36:41.803471    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0307 14:36:41.817039    3690 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 14:36:41.817135    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 14:36:41.829175    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0307 14:36:41.844641    3690 start.go:128] duration metric: took 2.302655667s to createHost
	I0307 14:36:41.844677    3690 start.go:83] releasing machines lock for "test-preload-690000", held for 2.302772041s
	W0307 14:36:41.844749    3690 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:41.860634    3690 out.go:177] * Deleting "test-preload-690000" in qemu2 ...
	W0307 14:36:41.883529    3690 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:41.883559    3690 start.go:728] Will try again in 5 seconds ...
	I0307 14:36:41.951488    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0307 14:36:41.951532    3690 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.410002666s
	I0307 14:36:41.951574    3690 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0307 14:36:42.308921    3690 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 14:36:42.309032    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 14:36:43.398983    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0307 14:36:43.399035    3690 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.8576395s
	I0307 14:36:43.399059    3690 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0307 14:36:44.200689    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 14:36:44.200738    3690 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.659348917s
	I0307 14:36:44.200775    3690 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 14:36:44.611582    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0307 14:36:44.611643    3690 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 5.069928792s
	I0307 14:36:44.611667    3690 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0307 14:36:45.117954    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0307 14:36:45.117999    3690 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.576416792s
	I0307 14:36:45.118025    3690 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0307 14:36:45.605215    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0307 14:36:45.605279    3690 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.063885541s
	I0307 14:36:45.605304    3690 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0307 14:36:46.011820    3690 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0307 14:36:46.011872    3690 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.470247042s
	I0307 14:36:46.011897    3690 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0307 14:36:46.883741    3690 start.go:360] acquireMachinesLock for test-preload-690000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:36:46.884201    3690 start.go:364] duration metric: took 373.208µs to acquireMachinesLock for "test-preload-690000"
	I0307 14:36:46.884330    3690 start.go:93] Provisioning new machine with config: &{Name:test-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:36:46.884561    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:36:46.895202    3690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:36:46.943739    3690 start.go:159] libmachine.API.Create for "test-preload-690000" (driver="qemu2")
	I0307 14:36:46.943799    3690 client.go:168] LocalClient.Create starting
	I0307 14:36:46.943926    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:36:46.943992    3690 main.go:141] libmachine: Decoding PEM data...
	I0307 14:36:46.944015    3690 main.go:141] libmachine: Parsing certificate...
	I0307 14:36:46.944085    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:36:46.944126    3690 main.go:141] libmachine: Decoding PEM data...
	I0307 14:36:46.944147    3690 main.go:141] libmachine: Parsing certificate...
	I0307 14:36:46.944685    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:36:47.095469    3690 main.go:141] libmachine: Creating SSH key...
	I0307 14:36:47.140168    3690 main.go:141] libmachine: Creating Disk image...
	I0307 14:36:47.140173    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:36:47.140339    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:47.152982    3690 main.go:141] libmachine: STDOUT: 
	I0307 14:36:47.153003    3690 main.go:141] libmachine: STDERR: 
	I0307 14:36:47.153062    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2 +20000M
	I0307 14:36:47.164068    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:36:47.164086    3690 main.go:141] libmachine: STDERR: 
	I0307 14:36:47.164096    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:47.164106    3690 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:36:47.164159    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:59:bb:8a:68:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/test-preload-690000/disk.qcow2
	I0307 14:36:47.166044    3690 main.go:141] libmachine: STDOUT: 
	I0307 14:36:47.166065    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:36:47.166080    3690 client.go:171] duration metric: took 222.275833ms to LocalClient.Create
	I0307 14:36:49.167602    3690 start.go:128] duration metric: took 2.283003167s to createHost
	I0307 14:36:49.167656    3690 start.go:83] releasing machines lock for "test-preload-690000", held for 2.283423042s
	W0307 14:36:49.167867    3690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:36:49.177311    3690 out.go:177] 
	W0307 14:36:49.180335    3690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:36:49.180364    3690 out.go:239] * 
	* 
	W0307 14:36:49.183058    3690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:36:49.191296    3690 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-07 14:36:49.210184 -0800 PST m=+3021.705751710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-690000 -n test-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-690000 -n test-preload-690000: exit status 7 (65.440458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-690000
--- FAIL: TestPreload (9.96s)
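
The TestPreload trace shows the driver's recovery flow in full: the first LocalClient.Create fails, the half-created profile is deleted, one more attempt is made after a fixed five-second pause, and the second failure is surfaced as GUEST_PROVISION (exit status 80). Below is a hedged sketch of that control flow, with illustrative function names rather than minikube's actual API.

	package retrysketch

	import "time"

	// createHostWithRetry mirrors the shape visible in the log: one
	// cleanup-and-retry after a fixed delay, then give up.
	func createHostWithRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup()                   // "* Deleting <profile> in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()             // a second failure exits as GUEST_PROVISION
	}

Note that the image cache downloads proceed in parallel and succeed even while host creation fails, which is why the cache.go lines above still report "exists" and "succeeded" for every v1.24.4 image.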

                                                
                                    
TestScheduledStopUnix (10.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-894000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-894000 --memory=2048 --driver=qemu2 : exit status 80 (9.90660775s)

                                                
                                                
-- stdout --
	* [scheduled-stop-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-894000" primary control-plane node in "scheduled-stop-894000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-894000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-894000" primary control-plane node in "scheduled-stop-894000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-894000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-07 14:36:59.28937 -0800 PST m=+3031.784918418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-894000 -n scheduled-stop-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-894000 -n scheduled-stop-894000: exit status 7 (70.491333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-894000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-894000
--- FAIL: TestScheduledStopUnix (10.08s)
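
Each post-mortem above runs out/minikube-darwin-arm64 status and deliberately tolerates exit status 7, logging "may be ok" because a stopped host is a consistent state for log-retrieval purposes. The following is a small sketch of that exit-code handling, assuming the binary path and an example profile name from this run.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "scheduled-stop-894000" // example profile from the run above
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout is captured even on a non-zero exit
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Matches helpers_test.go: "status error: exit status 7 (may be ok)".
			fmt.Printf("%s host: %s (exit 7, may be ok)\n",
				profile, strings.TrimSpace(string(out)))
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Printf("%s host: %s\n", profile, strings.TrimSpace(string(out)))
	}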

                                                
                                    
TestSkaffold (16.58s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1304462463 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-075000 --memory=2600 --driver=qemu2 
E0307 14:37:13.937306    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-075000 --memory=2600 --driver=qemu2 : exit status 80 (9.757461917s)

                                                
                                                
-- stdout --
	* [skaffold-075000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-075000" primary control-plane node in "skaffold-075000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-075000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-075000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-075000" primary control-plane node in "skaffold-075000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-075000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-07 14:37:15.873887 -0800 PST m=+3048.369404043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-075000 -n skaffold-075000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-075000 -n skaffold-075000: exit status 7 (63.925459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-075000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-075000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-075000
--- FAIL: TestSkaffold (16.58s)
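
The stray cert_rotation error in this section is a separate artifact: a client-go certificate watcher still references the client.crt of the already-deleted addons-263000 profile. Dangling references of this kind can be found by walking the kubeconfig; the sketch below uses client-go's clientcmd loader (an assumed extra dependency, not something the test suite itself does).

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// Flag any user entry whose client certificate file no longer exists.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); err != nil {
				fmt.Printf("user %q: dangling client cert %s\n", name, auth.ClientCertificate)
			}
		}
	}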

                                                
                                    
TestRunningBinaryUpgrade (629.28s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3202108957 start -p running-upgrade-974000 --memory=2200 --vm-driver=qemu2 
E0307 14:38:17.447143    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3202108957 start -p running-upgrade-974000 --memory=2200 --vm-driver=qemu2 : (1m22.9582565s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-974000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-974000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m28.79999425s)

                                                
                                                
-- stdout --
	* [running-upgrade-974000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-974000" primary control-plane node in "running-upgrade-974000" cluster
	* Updating the running qemu2 "running-upgrade-974000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:39:24.555421    4112 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:39:24.555561    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:39:24.555565    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:39:24.555567    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:39:24.555690    4112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:39:24.556777    4112 out.go:298] Setting JSON to false
	I0307 14:39:24.573674    4112 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4121,"bootTime":1709847043,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:39:24.573741    4112 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:39:24.579568    4112 out.go:177] * [running-upgrade-974000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:39:24.585531    4112 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:39:24.585604    4112 notify.go:220] Checking for updates...
	I0307 14:39:24.592510    4112 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:39:24.595437    4112 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:39:24.598474    4112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:39:24.601512    4112 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:39:24.604685    4112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:39:24.607873    4112 config.go:182] Loaded profile config "running-upgrade-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:39:24.611515    4112 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 14:39:24.614515    4112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:39:24.618480    4112 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:39:24.625451    4112 start.go:297] selected driver: qemu2
	I0307 14:39:24.625463    4112 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50298 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-974000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:39:24.625514    4112 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:39:24.627935    4112 cni.go:84] Creating CNI manager for ""
	I0307 14:39:24.627954    4112 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:39:24.627994    4112 start.go:340] cluster config:
	{Name:running-upgrade-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50298 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-974000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:39:24.628047    4112 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:39:24.631561    4112 out.go:177] * Starting "running-upgrade-974000" primary control-plane node in "running-upgrade-974000" cluster
	I0307 14:39:24.639486    4112 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:39:24.639504    4112 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 14:39:24.639515    4112 cache.go:56] Caching tarball of preloaded images
	I0307 14:39:24.639585    4112 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:39:24.639590    4112 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 14:39:24.639651    4112 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/config.json ...
	I0307 14:39:24.640240    4112 start.go:360] acquireMachinesLock for running-upgrade-974000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:39:24.640283    4112 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "running-upgrade-974000"
	I0307 14:39:24.640291    4112 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:39:24.640295    4112 fix.go:54] fixHost starting: 
	I0307 14:39:24.641080    4112 fix.go:112] recreateIfNeeded on running-upgrade-974000: state=Running err=<nil>
	W0307 14:39:24.641092    4112 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:39:24.648471    4112 out.go:177] * Updating the running qemu2 "running-upgrade-974000" VM ...
	I0307 14:39:24.652504    4112 machine.go:94] provisionDockerMachine start ...
	I0307 14:39:24.652539    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:24.652662    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:24.652668    4112 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 14:39:24.718432    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-974000
	
	I0307 14:39:24.718448    4112 buildroot.go:166] provisioning hostname "running-upgrade-974000"
	I0307 14:39:24.718494    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:24.718620    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:24.718626    4112 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-974000 && echo "running-upgrade-974000" | sudo tee /etc/hostname
	I0307 14:39:24.787182    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-974000
	
	I0307 14:39:24.787230    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:24.787346    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:24.787356    4112 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-974000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-974000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-974000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 14:39:24.851945    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
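
The hostname and /etc/hosts steps above show the provisioning pattern used throughout this log: every guest change is pushed as a small, idempotent shell snippet over SSH to localhost:50266. A minimal standalone sketch of that pattern using golang.org/x/crypto/ssh follows; the user, address, key path, and command are illustrative placeholders, not minikube's actual implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one shell command on the guest and returns its combined output.
func runSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Idempotent, like the snippet in the log: only append to /etc/hosts when missing.
	cmd := `grep -q running-upgrade-974000 /etc/hosts || echo '127.0.1.1 running-upgrade-974000' | sudo tee -a /etc/hosts`
	out, err := runSSH("localhost:50266", "/path/to/machines/running-upgrade-974000/id_rsa", cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}

Because each snippet checks state before mutating it, re-running the provisioner against an already-configured guest (as this upgrade test does) is safe.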
	I0307 14:39:24.851954    4112 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18320-1153/.minikube CaCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18320-1153/.minikube}
	I0307 14:39:24.851961    4112 buildroot.go:174] setting up certificates
	I0307 14:39:24.851967    4112 provision.go:84] configureAuth start
	I0307 14:39:24.851973    4112 provision.go:143] copyHostCerts
	I0307 14:39:24.852032    4112 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem, removing ...
	I0307 14:39:24.852038    4112 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem
	I0307 14:39:24.852174    4112 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem (1078 bytes)
	I0307 14:39:24.852364    4112 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem, removing ...
	I0307 14:39:24.852368    4112 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem
	I0307 14:39:24.852425    4112 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem (1123 bytes)
	I0307 14:39:24.852529    4112 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem, removing ...
	I0307 14:39:24.852533    4112 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem
	I0307 14:39:24.852590    4112 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem (1679 bytes)
	I0307 14:39:24.852699    4112 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-974000 san=[127.0.0.1 localhost minikube running-upgrade-974000]
	I0307 14:39:24.954521    4112 provision.go:177] copyRemoteCerts
	I0307 14:39:24.954556    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 14:39:24.954565    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:39:24.988093    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 14:39:24.994762    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 14:39:25.002078    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 14:39:25.009425    4112 provision.go:87] duration metric: took 157.4505ms to configureAuth
	I0307 14:39:25.009433    4112 buildroot.go:189] setting minikube options for container-runtime
	I0307 14:39:25.009529    4112 config.go:182] Loaded profile config "running-upgrade-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:39:25.009580    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:25.009667    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:25.009671    4112 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 14:39:25.073660    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 14:39:25.073669    4112 buildroot.go:70] root file system type: tmpfs
	I0307 14:39:25.073724    4112 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 14:39:25.073783    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:25.073902    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:25.073937    4112 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 14:39:25.141542    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 14:39:25.141598    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:25.141728    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:25.141737    4112 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 14:39:25.206060    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
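
The docker.service update at 14:39:25.141 is deliberately conditional: the rendered unit is written to a .new path, and only when diff reports a difference is it moved into place and followed by daemon-reload/enable/restart, so an unchanged unit never bounces the daemon. A local, non-SSH sketch of that write-compare-swap pattern (paths and the unit body are illustrative):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// installIfChanged writes unit to path only when the content differs, and
// reports whether a daemon-reload/restart is needed afterwards.
func installIfChanged(path string, unit []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, unit) {
		return false, nil // unchanged: skip the restart, like the log's `diff || { ... }`
	}
	if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
		return false, err
	}
	// Rename is atomic on the same filesystem, so readers never see a partial unit.
	return true, os.Rename(path+".new", path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := installIfChanged("/tmp/docker.service", unit)
	if err != nil {
		log.Fatal(err)
	}
	if changed {
		// Equivalent of: systemctl daemon-reload && systemctl restart docker
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			log.Fatalf("daemon-reload: %v: %s", err, out)
		}
	}
}

The empty ExecStart= line in the unit itself is required by systemd: it clears the ExecStart inherited from the base configuration so the drop-in's command replaces it rather than being appended (the unit's own comments explain the "more than one ExecStart=" error that results otherwise).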
	I0307 14:39:25.206071    4112 machine.go:97] duration metric: took 553.559917ms to provisionDockerMachine
	I0307 14:39:25.206076    4112 start.go:293] postStartSetup for "running-upgrade-974000" (driver="qemu2")
	I0307 14:39:25.206082    4112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 14:39:25.206144    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 14:39:25.206153    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:39:25.238546    4112 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 14:39:25.239973    4112 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 14:39:25.239980    4112 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/addons for local assets ...
	I0307 14:39:25.240050    4112 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/files for local assets ...
	I0307 14:39:25.240165    4112 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0307 14:39:25.240281    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 14:39:25.242847    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:39:25.249937    4112 start.go:296] duration metric: took 43.855833ms for postStartSetup
	I0307 14:39:25.249949    4112 fix.go:56] duration metric: took 609.653667ms for fixHost
	I0307 14:39:25.249988    4112 main.go:141] libmachine: Using SSH client type: native
	I0307 14:39:25.250088    4112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee9a30] 0x100eec290 <nil>  [] 0s} localhost 50266 <nil> <nil>}
	I0307 14:39:25.250093    4112 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 14:39:25.311268    4112 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709851165.798567140
	
	I0307 14:39:25.311277    4112 fix.go:216] guest clock: 1709851165.798567140
	I0307 14:39:25.311280    4112 fix.go:229] Guest: 2024-03-07 14:39:25.79856714 -0800 PST Remote: 2024-03-07 14:39:25.249951 -0800 PST m=+0.717617001 (delta=548.61614ms)
	I0307 14:39:25.311290    4112 fix.go:200] guest clock delta is within tolerance: 548.61614ms
	I0307 14:39:25.311294    4112 start.go:83] releasing machines lock for "running-upgrade-974000", held for 671.0045ms
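
The fix.go lines above run `date +%s.%N` on the guest, compare it against host time, and accept the 548ms drift because it falls inside the tolerance. A tiny sketch of that check; the 2s threshold here is an assumption for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host drift and whether it is tolerable.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(548 * time.Millisecond) // the delta observed in this run
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}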
	I0307 14:39:25.311347    4112 ssh_runner.go:195] Run: cat /version.json
	I0307 14:39:25.311355    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:39:25.311372    4112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 14:39:25.311387    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	W0307 14:39:25.312060    4112 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50379->127.0.0.1:50266: read: connection reset by peer
	I0307 14:39:25.312070    4112 retry.go:31] will retry after 139.494711ms: ssh: handshake failed: read tcp 127.0.0.1:50379->127.0.0.1:50266: read: connection reset by peer
	W0307 14:39:25.486657    4112 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 14:39:25.486739    4112 ssh_runner.go:195] Run: systemctl --version
	I0307 14:39:25.488864    4112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 14:39:25.490580    4112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 14:39:25.490607    4112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 14:39:25.493516    4112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 14:39:25.497738    4112 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 14:39:25.497744    4112 start.go:494] detecting cgroup driver to use...
	I0307 14:39:25.497822    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:39:25.504346    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 14:39:25.507732    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 14:39:25.510594    4112 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 14:39:25.510622    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 14:39:25.513638    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:39:25.516629    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 14:39:25.519375    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:39:25.522205    4112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 14:39:25.525119    4112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 14:39:25.528467    4112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 14:39:25.531261    4112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 14:39:25.534216    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:25.630403    4112 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 14:39:25.640400    4112 start.go:494] detecting cgroup driver to use...
	I0307 14:39:25.640481    4112 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 14:39:25.645439    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:39:25.650566    4112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 14:39:25.660644    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:39:25.665092    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 14:39:25.670876    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:39:25.676456    4112 ssh_runner.go:195] Run: which cri-dockerd
	I0307 14:39:25.677644    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 14:39:25.680699    4112 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 14:39:25.685553    4112 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 14:39:25.781913    4112 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 14:39:25.878721    4112 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 14:39:25.878790    4112 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 14:39:25.884226    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:25.973271    4112 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:39:28.558317    4112 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.585024791s)
	I0307 14:39:28.558394    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 14:39:28.563202    4112 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 14:39:28.570428    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:39:28.575835    4112 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 14:39:28.654872    4112 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 14:39:28.739978    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:28.822350    4112 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 14:39:28.828664    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:39:28.833216    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:28.922858    4112 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 14:39:28.961980    4112 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 14:39:28.962064    4112 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
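
"Will wait 60s for socket path" above is a simple poll: stat the socket until it exists or the deadline passes (here it appeared immediately). A sketch of that loop; the poll interval is an illustrative choice:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}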
	I0307 14:39:28.964036    4112 start.go:562] Will wait 60s for crictl version
	I0307 14:39:28.964072    4112 ssh_runner.go:195] Run: which crictl
	I0307 14:39:28.965588    4112 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 14:39:28.977598    4112 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 14:39:28.977670    4112 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:39:28.990704    4112 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:39:29.011775    4112 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 14:39:29.011901    4112 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 14:39:29.013274    4112 kubeadm.go:877] updating cluster {Name:running-upgrade-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50298 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-974000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 14:39:29.013319    4112 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:39:29.013360    4112 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:39:29.024305    4112 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:39:29.024313    4112 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 14:39:29.024361    4112 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:39:29.027289    4112 ssh_runner.go:195] Run: which lz4
	I0307 14:39:29.028509    4112 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 14:39:29.029627    4112 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 14:39:29.029638    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 14:39:29.700819    4112 docker.go:649] duration metric: took 672.331875ms to copy over tarball
	I0307 14:39:29.700884    4112 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 14:39:30.861639    4112 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.16074125s)
	I0307 14:39:30.861652    4112 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 14:39:30.878585    4112 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:39:30.882051    4112 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 14:39:30.887112    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:30.966959    4112 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:39:32.399726    4112 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.43274775s)
	I0307 14:39:32.399817    4112 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:39:32.414290    4112 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:39:32.414301    4112 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 14:39:32.414307    4112 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 14:39:32.420989    4112 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:39:32.421009    4112 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 14:39:32.421136    4112 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:39:32.421159    4112 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:39:32.421256    4112 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:39:32.421296    4112 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:39:32.421525    4112 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:39:32.421584    4112 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:39:32.430139    4112 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:39:32.430221    4112 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:39:32.430223    4112 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:39:32.430300    4112 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:39:32.430367    4112 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 14:39:32.430510    4112 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:39:32.430635    4112 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:39:32.430666    4112 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:39:34.374545    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:39:34.411004    4112 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 14:39:34.411054    4112 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:39:34.411155    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:39:34.435674    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 14:39:34.442183    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:39:34.456653    4112 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 14:39:34.456675    4112 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:39:34.456737    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:39:34.468035    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0307 14:39:34.480551    4112 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 14:39:34.480679    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:39:34.482996    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:39:34.492140    4112 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 14:39:34.492161    4112 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:39:34.492215    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:39:34.492782    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 14:39:34.503672    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:39:34.503719    4112 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 14:39:34.503734    4112 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:39:34.503753    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:39:34.504521    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 14:39:34.504604    4112 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:39:34.514927    4112 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 14:39:34.514948    4112 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 14:39:34.515004    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 14:39:34.519626    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 14:39:34.520010    4112 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 14:39:34.520021    4112 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:39:34.520047    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:39:34.524593    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 14:39:34.524632    4112 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 14:39:34.524647    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 14:39:34.551915    4112 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 14:39:34.551941    4112 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:39:34.551998    4112 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 14:39:34.555603    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 14:39:34.555695    4112 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 14:39:34.565538    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 14:39:34.583150    4112 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:39:34.583164    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 14:39:34.585689    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 14:39:34.585750    4112 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 14:39:34.585767    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 14:39:34.629227    4112 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 14:39:34.629247    4112 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 14:39:34.629253    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 14:39:34.655894    4112 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0307 14:39:35.232072    4112 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 14:39:35.232248    4112 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:39:35.250022    4112 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 14:39:35.250049    4112 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:39:35.250124    4112 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:39:36.700951    4112 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.450785375s)
	I0307 14:39:36.700996    4112 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 14:39:36.701455    4112 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:39:36.706648    4112 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 14:39:36.706721    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 14:39:36.765090    4112 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:39:36.765104    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 14:39:37.004879    4112 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 14:39:37.004921    4112 cache_images.go:92] duration metric: took 4.590596875s to LoadCachedImages
	W0307 14:39:37.004961    4112 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
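
LoadCachedImages above decides per image whether a transfer is needed by comparing the ID the runtime reports (`docker image inspect --format {{.Id}}`) against the expected digest; on mismatch it removes the stale image with `docker rmi` and reloads the cached tarball via `docker load` (the run fails here only because the kube-scheduler tarball is missing from the host cache). A rough local sketch of that loop, shelling out to the docker CLI; the helper names are hypothetical and the image/digest/tarball values are taken from this log purely as examples:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image's runtime ID differs from want.
func needsTransfer(image, want string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	return strings.TrimSpace(string(out)) != want
}

func loadFromCache(image, want, tarball string) error {
	if !needsTransfer(image, want) {
		return nil
	}
	exec.Command("docker", "rmi", image).Run() // best effort, mirroring the log
	f, err := os.Open(tarball)
	if err != nil {
		return err // e.g. the missing kube-scheduler tarball in this run
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // same effect as: cat tarball | docker load
	return cmd.Run()
}

func main() {
	err := loadFromCache(
		"registry.k8s.io/pause:3.7",
		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7",
	)
	if err != nil {
		log.Fatal(err)
	}
}

The transfer is needed at all because the preload tarball ships k8s.gcr.io-tagged images while this minikube expects registry.k8s.io names, as the two "Got preloaded images" listings earlier show.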
	I0307 14:39:37.004966    4112 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 14:39:37.005010    4112 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-974000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-974000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 14:39:37.005087    4112 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 14:39:37.018576    4112 cni.go:84] Creating CNI manager for ""
	I0307 14:39:37.018587    4112 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:39:37.018592    4112 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 14:39:37.018600    4112 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-974000 NodeName:running-upgrade-974000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 14:39:37.018663    4112 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-974000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 14:39:37.018716    4112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 14:39:37.022286    4112 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 14:39:37.022313    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 14:39:37.025412    4112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 14:39:37.030401    4112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 14:39:37.035600    4112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
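
The kubeadm manifest printed above is rendered from the kubeadm options struct and then shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering such a manifest with text/template; this is a trimmed, hypothetical template, not minikube's real one:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	data := struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{
		NodeIP:    "10.0.2.15",
		CRISocket: "unix:///var/run/cri-dockerd.sock",
		NodeName:  "running-upgrade-974000",
		Port:      8443,
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}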
	I0307 14:39:37.040727    4112 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 14:39:37.042013    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:39:37.126971    4112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:39:37.132512    4112 certs.go:68] Setting up /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000 for IP: 10.0.2.15
	I0307 14:39:37.132517    4112 certs.go:194] generating shared ca certs ...
	I0307 14:39:37.132526    4112 certs.go:226] acquiring lock for ca certs: {Name:mkd070f5c39ed755a49409426e8106f4ed85635d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:39:37.132661    4112 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key
	I0307 14:39:37.132691    4112 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key
	I0307 14:39:37.132696    4112 certs.go:256] generating profile certs ...
	I0307 14:39:37.132754    4112 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.key
	I0307 14:39:37.132771    4112 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key.f565e11d
	I0307 14:39:37.132783    4112 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt.f565e11d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 14:39:37.203960    4112 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt.f565e11d ...
	I0307 14:39:37.203969    4112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt.f565e11d: {Name:mk97ab7aefb2b4c8d038bcb668e4e947d944f7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:39:37.204253    4112 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key.f565e11d ...
	I0307 14:39:37.204260    4112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key.f565e11d: {Name:mkc92dc1936f4200c79a801f746cf7ef55776ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:39:37.204418    4112 certs.go:381] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt.f565e11d -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt
	I0307 14:39:37.204563    4112 certs.go:385] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key.f565e11d -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key
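
The apiserver certificate generated above is signed by the shared minikubeCA with the service IP, loopback, and node IP as SANs (10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15), which is why the earlier valid certs can be reused while only this one is regenerated for the current IP set. A compressed crypto/x509 sketch of issuing such a cert; it is self-contained (it creates a throwaway CA first), and key sizes and lifetimes are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the persistent minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued %d-byte DER cert", len(der))
}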
	I0307 14:39:37.204700    4112 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/proxy-client.key
	I0307 14:39:37.204808    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem (1338 bytes)
	W0307 14:39:37.204828    4112 certs.go:480] ignoring /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0307 14:39:37.204832    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 14:39:37.204848    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem (1078 bytes)
	I0307 14:39:37.204864    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem (1123 bytes)
	I0307 14:39:37.204880    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem (1679 bytes)
	I0307 14:39:37.204916    4112 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:39:37.205254    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 14:39:37.212474    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 14:39:37.219144    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 14:39:37.226135    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 14:39:37.234146    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 14:39:37.241677    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 14:39:37.248656    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 14:39:37.255768    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 14:39:37.263620    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0307 14:39:37.270302    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 14:39:37.277737    4112 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0307 14:39:37.284818    4112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 14:39:37.289735    4112 ssh_runner.go:195] Run: openssl version
	I0307 14:39:37.291497    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 14:39:37.294515    4112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:39:37.296002    4112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:39:37.296020    4112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:39:37.297824    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 14:39:37.301107    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0307 14:39:37.304001    4112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0307 14:39:37.305361    4112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 21:56 /usr/share/ca-certificates/1596.pem
	I0307 14:39:37.305382    4112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0307 14:39:37.307090    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
	I0307 14:39:37.310061    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0307 14:39:37.313535    4112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0307 14:39:37.314891    4112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 21:56 /usr/share/ca-certificates/15962.pem
	I0307 14:39:37.314910    4112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0307 14:39:37.316766    4112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
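
The block above refreshes the guest trust store by hand: install the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, then point /etc/ssl/certs/<hash>.0 at it. A sketch of that sequence in Go, shelling out to openssl exactly as the log does (paths from the log; ln -fs approximated by remove-then-symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func trust(pemPath string) error {
	// openssl x509 -hash -noout prints the subject hash that names the .0 symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // equivalent of ln -fs: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
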
	I0307 14:39:37.319325    4112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 14:39:37.320716    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 14:39:37.322532    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 14:39:37.324254    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 14:39:37.325956    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 14:39:37.327990    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 14:39:37.329674    4112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
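
Each `openssl x509 -checkend 86400` run exits non-zero only if the certificate expires within the next 24 hours, so a clean pass here means no regeneration is needed. The equivalent check in Go, assuming a single PEM-encoded certificate at the given path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same test as -checkend 86400: would now+d pass the NotAfter boundary?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
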
	I0307 14:39:37.331491    4112 kubeadm.go:391] StartCluster: {Name:running-upgrade-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50298 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-974000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:39:37.331556    4112 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:39:37.342036    4112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 14:39:37.345765    4112 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 14:39:37.345771    4112 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 14:39:37.345773    4112 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 14:39:37.345792    4112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 14:39:37.348936    4112 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:39:37.349155    4112 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-974000" does not appear in /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:39:37.349214    4112 kubeconfig.go:62] /Users/jenkins/minikube-integration/18320-1153/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-974000" cluster setting kubeconfig missing "running-upgrade-974000" context setting]
	I0307 14:39:37.349347    4112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
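
Repairing the kubeconfig means writing a fresh cluster/context/user triple for the profile. A rough equivalent using client-go's clientcmd package (minikube's own kubeconfig.go adds file locking and more validation; paths and names taken from the log):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/Users/jenkins/minikube-integration/18320-1153/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // no usable kubeconfig yet: start fresh
	}
	name := "running-upgrade-974000"
	base := "/Users/jenkins/minikube-integration/18320-1153/.minikube"
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               "https://10.0.2.15:8443",
		CertificateAuthority: base + "/ca.crt",
	}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
		ClientCertificate: base + "/profiles/" + name + "/client.crt",
		ClientKey:         base + "/profiles/" + name + "/client.key",
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	_ = clientcmd.WriteToFile(*cfg, path)
}
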
	I0307 14:39:37.350514    4112 kapi.go:59] client config for running-upgrade-974000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021df6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:39:37.351012    4112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 14:39:37.354717    4112 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-974000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
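
The drift found above is the upgrade's actual payload: criSocket gains its unix:// scheme, cgroupDriver flips from systemd to cgroupfs, and hairpinMode plus runtimeRequestTimeout appear. Detection itself is just the exit status of diff -u; a sketch with the command and paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		// diff exits 1 when the files differ: treat as config drift, reconfigure.
		fmt.Printf("drift detected:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("diff failed:", err) // exit code 2: missing file or other trouble
		return
	}
	fmt.Println("no drift")
}
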
	I0307 14:39:37.354725    4112 kubeadm.go:1153] stopping kube-system containers ...
	I0307 14:39:37.354775    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:39:37.370465    4112 docker.go:483] Stopping containers: [935b43f2f4ec 049c5d808bb5 860fe57b8177 e5c53745615e 61231a9d2be0 901e9188be70 515fe03e3ced d881a77e3c85 231c71fe1af3 d06a3786ab57 9978d3d3fd89 a5875dd5601e f7103dc3ed72 859aac2b3e3c]
	I0307 14:39:37.370540    4112 ssh_runner.go:195] Run: docker stop 935b43f2f4ec 049c5d808bb5 860fe57b8177 e5c53745615e 61231a9d2be0 901e9188be70 515fe03e3ced d881a77e3c85 231c71fe1af3 d06a3786ab57 9978d3d3fd89 a5875dd5601e f7103dc3ed72 859aac2b3e3c
	I0307 14:39:37.381418    4112 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 14:39:37.471666    4112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:39:37.475743    4112 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar  7 22:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar  7 22:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar  7 22:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar  7 22:39 /etc/kubernetes/scheduler.conf
	
	I0307 14:39:37.475793    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf
	I0307 14:39:37.479075    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:39:37.479102    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:39:37.482452    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf
	I0307 14:39:37.485808    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:39:37.485829    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:39:37.489259    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf
	I0307 14:39:37.492173    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:39:37.492205    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:39:37.494902    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf
	I0307 14:39:37.497739    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:39:37.497762    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
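
Each of the four kubeconfigs is kept only if it already references https://control-plane.minikube.internal:50298; the grep misses (exit status 1) show they still carry the pre-upgrade endpoint, so the files are removed for kubeadm to regenerate. A compact sketch of that filter, with the endpoint and paths from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:50298")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to prune
		}
		if !bytes.Contains(data, endpoint) {
			fmt.Println("stale endpoint, removing", f)
			os.Remove(f)
		}
	}
}
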
	I0307 14:39:37.500642    4112 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:39:37.503196    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:39:37.534035    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:39:37.933298    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:39:38.146520    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:39:38.167805    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
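
The restart path re-runs a fixed subset of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, reusing whatever state survived on disk. A sketch of driving those phases, with the binary and config paths from the log (the log additionally wraps each call in sudo env PATH=...):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		args := append(append([]string{"init", "phase"}, phase...),
			"--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
			return
		}
	}
}
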
	I0307 14:39:38.189519    4112 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:39:38.189607    4112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:39:38.691867    4112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:39:39.191676    4112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:39:39.196080    4112 api_server.go:72] duration metric: took 1.006561208s to wait for apiserver process to appear ...
	I0307 14:39:39.196089    4112 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:39:39.196098    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:39:44.198275    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:39:44.198328    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:39:49.198809    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:39:49.198891    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:39:54.199834    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:39:54.199907    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:39:59.201478    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:39:59.201548    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:04.202257    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:04.202343    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:09.204155    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:09.204239    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:14.206670    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:14.206756    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:19.209368    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:19.209434    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:24.211941    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:24.212015    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:29.214672    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:29.214753    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:34.217423    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:34.217501    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:39.219008    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
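
Every healthz probe above fails after exactly five seconds, a client-side timeout rather than a server answer, which is why the attempts repeat on a 5-second cadence until minikube falls back to log gathering. A sketch of such a probe loop (URL from the log; skipping TLS verification is a simplification here, the real client trusts minikube's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5 s cadence in the log
		Transport: &http.Transport{
			// Assumption for brevity; the real client pins minikube's CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
		}
	}
	fmt.Println("timed out waiting for healthz")
}
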
	I0307 14:40:39.219238    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:40:39.237958    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:40:39.238049    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:40:39.251918    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:40:39.251999    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:40:39.263328    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:40:39.263395    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:40:39.274204    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:40:39.274278    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:40:39.284454    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:40:39.284518    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:40:39.294907    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:40:39.294977    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:40:39.305264    4112 logs.go:276] 0 containers: []
	W0307 14:40:39.305277    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:40:39.305345    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:40:39.315961    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:40:39.315983    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:40:39.315989    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:40:39.329422    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:40:39.329432    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:40:39.349980    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:40:39.349993    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:40:39.368676    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:40:39.368686    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:40:39.441659    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:40:39.441673    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:40:39.456820    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:40:39.456843    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:40:39.468403    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:40:39.468416    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:40:39.480749    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:40:39.480761    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:40:39.506979    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:40:39.506987    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:40:39.510960    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:40:39.510967    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:40:39.530820    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:40:39.530830    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:40:39.545852    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:40:39.545862    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:40:39.557479    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:40:39.560805    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:40:39.572673    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:40:39.572687    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:40:39.609635    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:40:39.609643    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:40:39.626939    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:40:39.626951    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:40:39.638321    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:40:39.638333    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
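
Each diagnostic pass enumerates containers per component with docker ps -a --filter=name=k8s_<component>, then tails 400 lines from every match, which is why most components report two IDs (the current and the previous instance). A sketch of that fan-out (component list from the log; errors elided):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
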
	I0307 14:40:42.152283    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:47.153598    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:47.154132    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:40:47.192687    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:40:47.192843    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:40:47.215808    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:40:47.215936    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:40:47.231178    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:40:47.231251    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:40:47.244332    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:40:47.244402    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:40:47.254844    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:40:47.254913    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:40:47.265708    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:40:47.265787    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:40:47.279996    4112 logs.go:276] 0 containers: []
	W0307 14:40:47.280007    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:40:47.280070    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:40:47.290430    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:40:47.290448    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:40:47.290454    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:40:47.316031    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:40:47.316037    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:40:47.362793    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:40:47.362804    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:40:47.374672    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:40:47.374682    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:40:47.385769    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:40:47.385779    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:40:47.400487    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:40:47.400501    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:40:47.413357    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:40:47.413370    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:40:47.430403    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:40:47.430414    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:40:47.467096    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:40:47.467103    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:40:47.480790    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:40:47.480803    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:40:47.494870    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:40:47.494883    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:40:47.506015    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:40:47.506025    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:40:47.510375    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:40:47.510384    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:40:47.529754    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:40:47.529764    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:40:47.540983    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:40:47.540993    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:40:47.558544    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:40:47.558556    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:40:47.569856    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:40:47.569867    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:40:50.083721    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:40:55.086369    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:40:55.086835    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:40:55.126223    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:40:55.126358    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:40:55.148552    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:40:55.148673    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:40:55.170194    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:40:55.170279    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:40:55.181619    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:40:55.181689    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:40:55.192694    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:40:55.192764    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:40:55.210857    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:40:55.210931    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:40:55.222278    4112 logs.go:276] 0 containers: []
	W0307 14:40:55.222289    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:40:55.222352    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:40:55.234135    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:40:55.234152    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:40:55.234160    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:40:55.247853    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:40:55.247865    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:40:55.259590    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:40:55.259600    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:40:55.271148    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:40:55.271161    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:40:55.289937    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:40:55.289949    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:40:55.308767    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:40:55.308777    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:40:55.331364    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:40:55.331375    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:40:55.343593    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:40:55.343607    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:40:55.355858    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:40:55.355867    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:40:55.393243    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:40:55.393255    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:40:55.407608    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:40:55.407620    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:40:55.418839    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:40:55.418849    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:40:55.430399    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:40:55.430408    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:40:55.455932    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:40:55.455940    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:40:55.492002    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:40:55.492011    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:40:55.496384    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:40:55.496390    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:40:55.509061    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:40:55.509073    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:40:58.030843    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:03.033533    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:03.033939    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:03.082704    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:03.082834    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:03.102935    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:03.103064    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:03.117305    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:03.117386    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:03.135803    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:03.135874    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:03.146985    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:03.147056    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:03.157629    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:03.157701    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:03.171314    4112 logs.go:276] 0 containers: []
	W0307 14:41:03.171326    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:03.171385    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:03.186600    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:03.186616    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:03.186621    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:03.205178    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:03.205188    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:03.219247    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:03.219257    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:03.244764    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:03.244778    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:03.249691    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:03.249699    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:03.261329    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:03.261343    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:03.275632    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:03.275644    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:03.290543    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:03.290553    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:03.302302    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:03.302314    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:03.321721    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:03.321731    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:03.334881    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:03.334900    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:03.346070    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:03.346078    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:03.380980    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:03.380994    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:03.398361    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:03.398375    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:03.413672    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:03.413681    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:03.431805    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:03.431815    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:03.444399    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:03.444410    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:05.982241    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:10.983636    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:10.984064    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:11.021831    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:11.021971    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:11.042954    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:11.043063    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:11.058266    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:11.058342    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:11.070845    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:11.070921    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:11.081459    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:11.081527    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:11.093037    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:11.093106    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:11.103240    4112 logs.go:276] 0 containers: []
	W0307 14:41:11.103250    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:11.103305    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:11.113277    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:11.113298    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:11.113303    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:11.132381    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:11.132392    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:11.147645    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:11.147660    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:11.159138    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:11.159150    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:11.171347    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:11.171357    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:11.208431    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:11.208441    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:11.222264    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:11.222277    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:11.239346    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:11.239356    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:11.250743    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:11.250755    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:11.275936    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:11.275945    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:11.286839    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:11.286850    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:11.298224    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:11.298238    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:11.312699    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:11.312710    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:11.330292    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:11.330303    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:11.334517    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:11.334526    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:11.368205    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:11.368217    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:11.382637    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:11.382649    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:13.902681    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:18.905037    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:18.905286    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:18.932612    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:18.932691    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:18.944830    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:18.944912    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:18.956011    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:18.956080    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:18.970273    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:18.970349    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:18.981778    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:18.981847    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:18.993016    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:18.993088    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:19.004359    4112 logs.go:276] 0 containers: []
	W0307 14:41:19.004371    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:19.004428    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:19.014988    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:19.015008    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:19.015014    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:19.027632    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:19.027646    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:19.042731    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:19.042747    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:19.054599    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:19.054611    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:19.080177    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:19.080188    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:19.099336    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:19.099347    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:19.116864    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:19.116875    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:19.129179    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:19.129190    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:19.165412    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:19.165422    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:19.180502    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:19.180512    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:19.193350    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:19.193360    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:19.207158    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:19.207170    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:19.244425    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:19.244443    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:19.250165    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:19.250176    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:19.265632    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:19.265648    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:19.277944    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:19.277959    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:19.305864    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:19.305875    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:21.819789    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:26.822376    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:26.822518    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:26.834596    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:26.834667    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:26.845774    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:26.845859    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:26.857080    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:26.857152    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:26.868670    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:26.868747    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:26.879259    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:26.879327    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:26.890580    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:26.890652    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:26.902228    4112 logs.go:276] 0 containers: []
	W0307 14:41:26.902242    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:26.902318    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:26.914405    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:26.914429    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:26.914435    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:26.926467    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:26.926477    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:26.944099    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:26.944109    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:26.980849    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:26.980862    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:26.994849    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:26.994862    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:27.033677    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:27.033688    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:27.053242    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:27.053253    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:27.068012    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:27.068026    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:27.080061    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:27.080072    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:27.095763    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:27.095773    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:27.113186    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:27.113203    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:27.125203    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:27.125215    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:27.149404    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:27.149412    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:27.154030    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:27.154039    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:27.165653    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:27.165686    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:27.177691    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:27.177702    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:27.191909    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:27.191920    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
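Each discovered container then gets a bounded "docker logs --tail 400", while host-level sources (kubelet, docker/cri-docker units, dmesg, describe nodes) are read through shell pipelines — everything funnels through one bash-based runner, which is why even simple commands are wrapped in /bin/bash -c. A hypothetical sketch of that runner, reusing commands taken verbatim from the log above:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, mirroring the
// ssh_runner invocations above; bash is needed because some commands are
// pipelines or use command substitution (e.g. the crictl/docker fallback).
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	cmds := []string{
		"docker logs --tail 400 b39c80dbb3f1",            // coredns container from the log
		"sudo journalctl -u kubelet -n 400",              // kubelet unit logs
		"sudo journalctl -u docker -u cri-docker -n 400", // Docker runtime logs
	}
	for _, cmd := range cmds {
		out, err := gather(cmd)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Print(out)
	}
}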
	I0307 14:41:29.705391    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:34.707495    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:34.707846    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:34.731856    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:34.731981    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:34.747937    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:34.748037    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:34.760686    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:34.760764    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:34.772119    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:34.772187    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:34.782478    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:34.782541    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:34.792810    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:34.792875    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:34.807198    4112 logs.go:276] 0 containers: []
	W0307 14:41:34.807211    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:34.807271    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:34.817720    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:34.817741    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:34.817747    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:34.830139    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:34.830151    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:34.841557    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:34.841568    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:34.846270    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:34.846278    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:34.863793    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:34.863804    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:34.874889    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:34.874900    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:34.888564    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:34.888576    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:34.900289    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:34.900300    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:34.912265    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:34.912277    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:34.937865    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:34.937875    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:34.974518    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:34.974525    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:34.989533    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:34.989544    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:35.001199    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:35.001211    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:35.018942    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:35.018955    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:35.030191    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:35.030202    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:35.065775    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:35.065785    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:35.089048    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:35.089059    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:37.612620    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:42.614668    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:42.614808    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:42.631165    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:42.631235    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:42.642336    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:42.642405    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:42.652669    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:42.652736    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:42.662752    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:42.662820    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:42.672928    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:42.672988    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:42.683751    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:42.683823    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:42.693906    4112 logs.go:276] 0 containers: []
	W0307 14:41:42.693919    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:42.693972    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:42.705057    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:42.705076    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:42.705081    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:42.717167    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:42.717177    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:42.721888    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:42.721896    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:42.740525    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:42.740540    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:42.758387    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:42.758397    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:42.770568    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:42.770579    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:42.784502    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:42.784516    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:42.803484    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:42.803499    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:42.820611    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:42.820622    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:42.832006    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:42.832018    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:42.846892    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:42.846903    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:42.859119    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:42.859133    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:42.883896    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:42.883906    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:42.921708    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:42.921718    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:42.957914    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:42.957926    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:42.971935    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:42.971950    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:42.983224    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:42.983236    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:45.495109    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:50.497455    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:50.497807    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:50.532508    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:50.532639    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:50.553403    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:50.553497    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:50.569168    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:50.569250    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:50.579929    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:50.580004    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:50.596138    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:50.596209    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:50.606788    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:50.606865    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:50.617137    4112 logs.go:276] 0 containers: []
	W0307 14:41:50.617159    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:50.617221    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:50.628119    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:50.628138    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:50.628144    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:50.665366    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:50.665376    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:50.669957    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:50.669965    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:50.688884    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:50.688895    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:50.707174    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:50.707183    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:50.718638    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:50.718650    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:50.730189    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:50.730204    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:50.764194    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:50.764203    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:50.778599    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:50.778611    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:50.803341    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:50.803354    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:41:50.820302    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:50.820312    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:50.843791    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:50.843801    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:50.854864    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:50.854877    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:50.865802    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:50.865817    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:50.879732    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:50.879744    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:50.890840    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:50.890852    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:50.902795    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:50.902807    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:53.416604    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:41:58.419168    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:41:58.419327    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:41:58.431129    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:41:58.431211    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:41:58.442339    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:41:58.442417    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:41:58.453066    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:41:58.453137    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:41:58.474251    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:41:58.474333    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:41:58.485649    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:41:58.485723    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:41:58.497080    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:41:58.497153    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:41:58.511063    4112 logs.go:276] 0 containers: []
	W0307 14:41:58.511078    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:41:58.511148    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:41:58.524199    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:41:58.524218    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:41:58.524224    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:41:58.561302    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:41:58.561319    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:41:58.583618    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:41:58.583633    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:41:58.597207    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:41:58.597219    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:41:58.612064    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:41:58.612078    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:41:58.632864    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:41:58.632881    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:41:58.645091    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:41:58.645104    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:41:58.657276    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:41:58.657288    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:41:58.670248    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:41:58.670260    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:41:58.674787    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:41:58.674808    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:41:58.689846    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:41:58.689863    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:41:58.711030    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:41:58.711040    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:41:58.722907    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:41:58.722919    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:41:58.735285    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:41:58.735296    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:41:58.753720    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:41:58.753732    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:41:58.779330    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:41:58.779348    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:41:58.819827    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:41:58.819843    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:01.341584    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:06.343965    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:06.344383    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:06.386600    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:06.386741    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:06.409247    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:06.409353    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:06.424031    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:06.424115    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:06.442017    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:06.442096    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:06.453131    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:06.453202    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:06.463529    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:06.463601    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:06.473754    4112 logs.go:276] 0 containers: []
	W0307 14:42:06.473765    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:06.473823    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:06.484392    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:06.484412    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:06.484417    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:06.498110    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:06.498122    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:06.516450    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:06.516464    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:06.529390    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:06.529399    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:06.541431    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:06.541442    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:06.552675    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:06.552686    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:06.588772    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:06.588780    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:06.607770    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:06.607781    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:06.619368    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:06.619380    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:06.635723    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:06.635737    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:06.640445    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:06.640455    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:06.654597    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:06.654609    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:06.669254    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:06.669265    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:06.687045    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:06.687056    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:06.700406    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:06.700418    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:06.713063    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:06.713075    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:06.749039    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:06.749052    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:09.275160    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:14.277584    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:14.277872    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:14.306403    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:14.306533    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:14.329408    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:14.329495    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:14.343576    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:14.343650    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:14.355107    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:14.355185    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:14.366176    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:14.366246    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:14.377010    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:14.377081    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:14.387303    4112 logs.go:276] 0 containers: []
	W0307 14:42:14.387313    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:14.387371    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:14.397449    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:14.397467    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:14.397473    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:14.436754    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:14.436769    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:14.451618    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:14.451632    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:14.471934    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:14.471947    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:14.489352    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:14.489362    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:14.501230    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:14.501241    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:14.513896    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:14.513909    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:14.532916    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:14.532928    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:14.537667    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:14.537674    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:14.575928    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:14.575941    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:14.590050    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:14.590063    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:14.609574    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:14.609586    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:14.633279    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:14.633291    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:14.650506    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:14.650517    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:14.661889    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:14.661901    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:14.674188    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:14.674199    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:14.685954    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:14.685968    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:17.203419    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:22.205210    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:22.205383    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:22.217130    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:22.217208    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:22.227892    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:22.227987    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:22.238918    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:22.238981    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:22.249380    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:22.249442    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:22.260317    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:22.260380    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:22.273695    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:22.273759    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:22.284067    4112 logs.go:276] 0 containers: []
	W0307 14:42:22.284079    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:22.284134    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:22.295203    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:22.295221    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:22.295228    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:22.330466    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:22.330478    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:22.342206    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:22.342216    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:22.346565    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:22.346572    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:22.360206    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:22.360218    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:22.377360    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:22.377369    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:22.389508    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:22.389520    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:22.413020    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:22.413032    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:22.450111    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:22.450119    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:22.464590    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:22.464600    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:22.476194    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:22.476207    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:22.500429    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:22.500440    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:22.512863    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:22.512873    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:22.533609    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:22.533620    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:22.556775    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:22.556786    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:22.568282    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:22.568293    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:22.583668    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:22.583680    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:25.103467    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:30.105977    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:30.106386    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:30.139620    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:30.139757    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:30.158714    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:30.158812    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:30.173452    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:30.173533    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:30.189489    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:30.189561    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:30.200456    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:30.200524    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:30.214793    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:30.214864    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:30.225007    4112 logs.go:276] 0 containers: []
	W0307 14:42:30.225018    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:30.225074    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:30.235707    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:30.235727    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:30.235733    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:30.240458    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:30.240464    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:30.280064    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:30.280077    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:30.292567    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:30.292578    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:30.304830    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:30.304842    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:30.319431    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:30.319442    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:30.331802    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:30.331816    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:30.349776    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:30.349787    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:30.364820    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:30.364831    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:30.376642    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:30.376654    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:30.388855    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:30.388867    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:30.413512    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:30.413523    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:30.451112    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:30.451120    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:30.470431    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:30.470444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:30.485190    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:30.485200    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:30.497112    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:30.497125    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:30.516704    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:30.516715    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:33.037930    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:38.038501    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:38.038586    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:38.050934    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:38.051009    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:38.063486    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:38.063560    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:38.079547    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:38.079620    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:38.095902    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:38.095973    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:38.107536    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:38.107610    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:38.119003    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:38.119082    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:38.130573    4112 logs.go:276] 0 containers: []
	W0307 14:42:38.130584    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:38.130648    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:38.141987    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:38.142009    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:38.142016    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:38.182792    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:38.182805    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:38.195450    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:38.195461    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:38.209073    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:38.209087    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:38.224480    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:38.224492    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:38.248499    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:38.248514    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:38.253786    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:38.253798    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:38.273920    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:38.273933    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:38.287235    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:38.287246    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:38.304050    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:38.304062    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:38.318845    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:38.318857    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:38.342813    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:38.342826    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:38.381699    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:38.381714    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:38.401498    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:38.401513    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:38.415433    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:38.415444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:38.431515    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:38.431531    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:38.453004    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:38.453017    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:40.968620    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:45.970906    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:45.971101    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:45.993969    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:45.994073    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:46.008715    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:46.008797    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:46.026439    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:46.026542    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:46.037221    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:46.037295    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:46.048321    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:46.048389    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:46.060117    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:46.060180    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:46.071555    4112 logs.go:276] 0 containers: []
	W0307 14:42:46.071568    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:46.071635    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:46.084178    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:46.084196    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:46.084202    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:46.124273    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:46.124294    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:46.131211    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:46.131226    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:46.153688    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:46.153714    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:46.173039    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:46.173056    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:46.197773    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:46.197787    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:46.211980    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:46.211991    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:46.233711    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:46.233721    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:46.246718    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:46.246731    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:46.286938    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:46.286958    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:46.300682    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:46.300695    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:46.314093    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:46.314107    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:46.327508    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:46.327524    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:46.340854    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:46.340870    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:46.357372    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:46.357387    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:46.370572    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:46.370588    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:46.386620    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:46.386634    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
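
The pattern repeated throughout this stretch (probe /healthz, give up after the ~5s client timeout, rediscover containers, recapture their logs, retry) can be summarized in a short Go sketch. The helper name, sleep interval, and hard-coded container IDs are illustrative; minikube's real loop lives in api_server.go and logs.go:

    package main

    import (
        "crypto/tls"
        "net/http"
        "os/exec"
        "time"
    )

    // gatherLogs mirrors the "Gathering logs for ..." lines above: for each known
    // container ID, capture the last 400 log lines. IDs here are placeholders.
    func gatherLogs(ids []string) {
        for _, id := range ids {
            exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).Run()
        }
    }

    func main() {
        // The 5s per-request timeout matches the ~5s gap between each
        // "Checking apiserver healthz" and its "stopped: ... Client.Timeout
        // exceeded" line in the log.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        ids := []string{"31d62bf3ca57", "d06a3786ab57"} // e.g. the two kube-apiserver containers
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                return // control plane is healthy
            }
            if resp != nil {
                resp.Body.Close()
            }
            gatherLogs(ids)
            time.Sleep(2 * time.Second) // back off briefly before the next probe
        }
    }
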
	I0307 14:42:48.901951    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:53.904208    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:53.904426    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:53.925769    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:53.925892    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:53.941669    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:53.941758    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:53.953834    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:53.953909    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:53.964170    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:53.964238    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:53.974528    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:53.974593    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:53.985124    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:53.985191    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:53.995975    4112 logs.go:276] 0 containers: []
	W0307 14:42:53.995986    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:53.996049    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:54.006150    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:54.006170    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:54.006176    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:54.025859    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:54.025873    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:54.038001    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:54.038011    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:54.052900    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:54.052913    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:54.069787    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:54.069797    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:54.081524    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:54.081535    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:54.086310    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:54.086317    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:54.100335    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:54.100344    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:54.111449    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:54.111462    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:54.146476    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:54.146484    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:54.163054    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:54.163066    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:54.180058    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:54.180070    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:54.199225    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:54.199235    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:54.223617    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:54.223626    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:54.257696    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:54.257708    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:54.269920    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:54.269932    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:54.281824    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:54.281836    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:56.802338    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:01.803684    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:01.803834    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:01.819454    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:01.819526    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:01.830094    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:01.830165    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:01.840484    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:01.840555    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:01.851338    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:01.851401    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:01.861972    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:01.862038    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:01.872617    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:01.872688    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:01.882444    4112 logs.go:276] 0 containers: []
	W0307 14:43:01.882461    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:01.882527    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:01.892961    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:01.892978    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:01.892984    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:01.906970    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:01.906983    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:01.922228    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:01.922239    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:01.943720    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:01.943733    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:01.955054    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:01.955066    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:01.959472    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:01.959482    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:01.978622    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:01.978633    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:01.990075    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:01.990091    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:02.001599    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:02.001613    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:02.039369    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:02.039385    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:02.062560    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:02.062576    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:02.075139    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:02.075152    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:02.087983    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:02.087994    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:02.126373    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:02.126393    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:02.140485    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:02.140497    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:02.152795    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:02.152809    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:02.188622    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:02.188634    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:04.716427    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:09.718704    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:09.719148    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:09.735672    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:09.735755    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:09.746429    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:09.746491    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:09.766722    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:09.766797    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:09.779059    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:09.779138    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:09.789120    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:09.789194    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:09.800038    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:09.800103    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:09.809850    4112 logs.go:276] 0 containers: []
	W0307 14:43:09.809863    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:09.809923    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:09.820660    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:09.820675    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:09.820681    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:09.832198    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:09.832209    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:09.845930    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:09.845943    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:09.864586    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:09.864596    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:09.882703    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:09.882721    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:09.894977    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:09.894989    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:09.899366    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:09.899373    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:09.913551    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:09.913561    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:09.936061    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:09.936069    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:09.950695    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:09.950706    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:09.963125    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:09.963136    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:09.976013    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:09.976024    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:09.987102    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:09.987114    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:10.026696    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:10.026708    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:10.045978    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:10.045990    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:10.057636    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:10.057647    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:10.094779    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:10.094788    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:12.612185    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:17.614742    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:17.615167    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:17.655937    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:17.656068    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:17.677257    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:17.677385    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:17.692454    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:17.692542    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:17.705354    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:17.705429    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:17.716018    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:17.716088    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:17.726356    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:17.726428    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:17.736395    4112 logs.go:276] 0 containers: []
	W0307 14:43:17.736406    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:17.736464    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:17.747069    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:17.747088    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:17.747094    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:17.782162    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:17.782172    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:17.821454    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:17.821468    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:17.833069    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:17.833079    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:17.844432    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:17.844443    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:17.858822    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:17.858835    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:17.863344    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:17.863359    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:17.880991    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:17.881003    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:17.892687    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:17.892699    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:17.909563    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:17.909578    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:17.921327    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:17.921337    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:17.935097    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:17.935108    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:17.957132    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:17.957142    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:17.971319    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:17.971330    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:17.991656    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:17.991668    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:18.009468    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:18.009477    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:18.021348    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:18.021358    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:20.536488    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:25.539169    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:25.539562    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:25.572692    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:25.572830    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:25.592605    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:25.592706    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:25.606856    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:25.606931    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:25.621931    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:25.622006    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:25.632819    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:25.632891    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:25.643866    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:25.643930    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:25.653996    4112 logs.go:276] 0 containers: []
	W0307 14:43:25.654010    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:25.654070    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:25.664353    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:25.664370    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:25.664376    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:25.703320    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:25.703333    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:25.721086    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:25.721096    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:25.732341    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:25.732351    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:25.744113    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:25.744125    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:25.755848    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:25.755859    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:25.767202    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:25.767212    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:25.778750    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:25.778761    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:25.793222    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:25.793233    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:25.805251    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:25.805265    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:25.821113    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:25.821126    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:25.838482    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:25.838492    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:25.875961    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:25.875970    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:25.880676    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:25.880684    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:25.894634    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:25.894645    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:25.914340    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:25.914353    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:25.928890    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:25.928902    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:28.456114    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:33.457192    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:33.457346    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:33.469538    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:33.469618    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:33.481236    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:33.481314    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:33.492832    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:33.492899    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:33.503542    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:33.503609    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:33.514008    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:33.514071    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:33.524917    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:33.524977    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:33.534994    4112 logs.go:276] 0 containers: []
	W0307 14:43:33.535007    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:33.535071    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:33.545295    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:33.545314    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:33.545321    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:33.557326    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:33.557342    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:33.569558    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:33.569569    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:33.587251    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:33.587261    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:33.598426    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:33.598437    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:33.621409    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:33.621417    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:33.638713    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:33.638733    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:33.654475    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:33.654486    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:33.665886    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:33.665902    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:33.702028    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:33.702045    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:33.718802    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:33.718812    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:33.723162    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:33.723169    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:33.741792    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:33.741803    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:33.755772    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:33.755784    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:33.770490    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:33.770505    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:33.782064    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:33.782078    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:33.794458    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:33.794471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:36.332353    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:41.333837    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:41.333932    4112 kubeadm.go:591] duration metric: took 4m3.987681875s to restartPrimaryControlPlane
	W0307 14:43:41.334034    4112 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 14:43:41.334080    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 14:43:42.349383    4112 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.015286042s)
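
At this point the restart path is abandoned and the cluster is rebuilt: kubeadm reset (just completed above), stale-config cleanup, then kubeadm init. A hedged Go outline of that fallback, where run is an assumed stand-in for minikube's SSH command runner and the init flags are abbreviated (the full command appears later in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // resetAndReinit mirrors the fallback above: kubeadm reset, then (after the
    // stale-config cleanup shown next) kubeadm init from the saved config.
    func resetAndReinit(run func(string) error) error {
        const kubeadm = `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm`
        if err := run(kubeadm + ` reset --cri-socket /var/run/cri-dockerd.sock --force`); err != nil {
            return err
        }
        return run(kubeadm + ` init --config /var/tmp/minikube/kubeadm.yaml`)
    }

    func main() {
        // Local stand-in for the SSH runner: execute via bash on this machine.
        run := func(cmd string) error { return exec.Command("/bin/bash", "-c", cmd).Run() }
        if err := resetAndReinit(run); err != nil {
            fmt.Println("fallback failed:", err)
        }
    }
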
	I0307 14:43:42.349453    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 14:43:42.354610    4112 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:43:42.357568    4112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:43:42.360367    4112 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:43:42.360372    4112 kubeadm.go:156] found existing configuration files:
	
	I0307 14:43:42.360397    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf
	I0307 14:43:42.362684    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:43:42.362710    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:43:42.365724    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf
	I0307 14:43:42.368841    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:43:42.368865    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:43:42.371579    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf
	I0307 14:43:42.374238    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:43:42.374271    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:43:42.377781    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf
	I0307 14:43:42.380844    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:43:42.380875    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
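
The four grep/rm pairs above are stale-config cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted before re-init (here every grep exits with status 2 because the reset already removed the files). An equivalent Go sketch, with the endpoint taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:50298"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the endpoint is absent (status 2 above,
            // since the files themselves are missing); remove the file then.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                exec.Command("sudo", "rm", "-f", path).Run()
                fmt.Println("removed stale", path)
            }
        }
    }
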
	I0307 14:43:42.383577    4112 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 14:43:42.401271    4112 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 14:43:42.401330    4112 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 14:43:42.448806    4112 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 14:43:42.448861    4112 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 14:43:42.448910    4112 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 14:43:42.501390    4112 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 14:43:42.506290    4112 out.go:204]   - Generating certificates and keys ...
	I0307 14:43:42.506328    4112 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 14:43:42.506364    4112 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 14:43:42.506408    4112 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 14:43:42.506443    4112 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 14:43:42.506483    4112 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 14:43:42.506511    4112 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 14:43:42.506548    4112 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 14:43:42.506588    4112 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 14:43:42.506631    4112 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 14:43:42.506668    4112 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 14:43:42.506686    4112 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 14:43:42.506712    4112 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 14:43:42.580402    4112 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 14:43:42.647187    4112 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 14:43:42.714404    4112 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 14:43:42.798059    4112 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 14:43:42.825036    4112 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 14:43:42.825528    4112 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 14:43:42.825550    4112 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 14:43:42.922424    4112 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 14:43:42.926547    4112 out.go:204]   - Booting up control plane ...
	I0307 14:43:42.926596    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 14:43:42.926648    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 14:43:42.926680    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 14:43:42.926716    4112 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 14:43:42.926862    4112 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 14:43:47.426352    4112 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503999 seconds
	I0307 14:43:47.426443    4112 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 14:43:47.431704    4112 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 14:43:47.956976    4112 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 14:43:47.957220    4112 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-974000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 14:43:48.459958    4112 kubeadm.go:309] [bootstrap-token] Using token: r77cp9.ivp7f2h88tolpeh7
	I0307 14:43:48.466661    4112 out.go:204]   - Configuring RBAC rules ...
	I0307 14:43:48.466720    4112 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 14:43:48.466765    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 14:43:48.468724    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 14:43:48.470265    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 14:43:48.470925    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 14:43:48.471935    4112 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 14:43:48.475319    4112 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 14:43:48.636379    4112 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 14:43:48.864243    4112 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 14:43:48.864686    4112 kubeadm.go:309] 
	I0307 14:43:48.864714    4112 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 14:43:48.864717    4112 kubeadm.go:309] 
	I0307 14:43:48.864751    4112 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 14:43:48.864756    4112 kubeadm.go:309] 
	I0307 14:43:48.864767    4112 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 14:43:48.864835    4112 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 14:43:48.864863    4112 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 14:43:48.864883    4112 kubeadm.go:309] 
	I0307 14:43:48.864913    4112 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 14:43:48.864919    4112 kubeadm.go:309] 
	I0307 14:43:48.864971    4112 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 14:43:48.864991    4112 kubeadm.go:309] 
	I0307 14:43:48.865089    4112 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 14:43:48.865126    4112 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 14:43:48.865185    4112 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 14:43:48.865190    4112 kubeadm.go:309] 
	I0307 14:43:48.865239    4112 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 14:43:48.865281    4112 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 14:43:48.865284    4112 kubeadm.go:309] 
	I0307 14:43:48.865318    4112 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r77cp9.ivp7f2h88tolpeh7 \
	I0307 14:43:48.865380    4112 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b \
	I0307 14:43:48.865404    4112 kubeadm.go:309] 	--control-plane 
	I0307 14:43:48.865413    4112 kubeadm.go:309] 
	I0307 14:43:48.865456    4112 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 14:43:48.865459    4112 kubeadm.go:309] 
	I0307 14:43:48.865500    4112 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r77cp9.ivp7f2h88tolpeh7 \
	I0307 14:43:48.865555    4112 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b 
	I0307 14:43:48.865610    4112 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 14:43:48.865614    4112 cni.go:84] Creating CNI manager for ""
	I0307 14:43:48.865622    4112 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:43:48.868347    4112 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 14:43:48.874375    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 14:43:48.877686    4112 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
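
The conflist itself is not shown in the log (only its size, 457 bytes). For orientation, a hypothetical example of the kind of bridge CNI config written to /etc/cni/net.d/1-k8s.conflist; the exact fields and subnet here are assumptions, not the file's actual contents:

    package main

    // bridgeConflist is an illustrative bridge CNI configuration of the sort
    // minikube generates when it recommends the "bridge" CNI, as logged above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() { println(bridgeConflist) }
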
	I0307 14:43:48.882843    4112 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 14:43:48.882899    4112 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 14:43:48.882902    4112 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-974000 minikube.k8s.io/updated_at=2024_03_07T14_43_48_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=running-upgrade-974000 minikube.k8s.io/primary=true
	I0307 14:43:48.924513    4112 kubeadm.go:1106] duration metric: took 41.650291ms to wait for elevateKubeSystemPrivileges
	I0307 14:43:48.924532    4112 ops.go:34] apiserver oom_adj: -16
	W0307 14:43:48.924691    4112 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
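
The oom_adj probe just above reads the kernel OOM-score adjustment of the apiserver process; the logged value of -16 makes the apiserver an unlikely OOM-kill target. A standalone Go version of the same probe (the shell pipeline is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the log line: read /proc/<pid>/oom_adj for kube-apiserver.
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err == nil {
            fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
        }
    }
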
	I0307 14:43:48.924698    4112 kubeadm.go:393] duration metric: took 4m11.592726167s to StartCluster
	I0307 14:43:48.924707    4112 settings.go:142] acquiring lock: {Name:mke1b21879aa30c53aba22e595b1eacff0880bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:43:48.924826    4112 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:43:48.925246    4112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:43:48.925425    4112 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:43:48.929305    4112 out.go:177] * Verifying Kubernetes components...
	I0307 14:43:48.925517    4112 config.go:182] Loaded profile config "running-upgrade-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:43:48.925482    4112 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 14:43:48.937169    4112 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-974000"
	I0307 14:43:48.937184    4112 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-974000"
	W0307 14:43:48.937188    4112 addons.go:243] addon storage-provisioner should already be in state true
	I0307 14:43:48.937206    4112 host.go:66] Checking if "running-upgrade-974000" exists ...
	I0307 14:43:48.937220    4112 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-974000"
	I0307 14:43:48.937237    4112 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-974000"
	I0307 14:43:48.937254    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:43:48.938385    4112 kapi.go:59] client config for running-upgrade-974000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021df6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:43:48.938914    4112 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-974000"
	W0307 14:43:48.938919    4112 addons.go:243] addon default-storageclass should already be in state true
	I0307 14:43:48.938927    4112 host.go:66] Checking if "running-upgrade-974000" exists ...
	I0307 14:43:48.943256    4112 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:43:48.947340    4112 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:43:48.947346    4112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 14:43:48.947353    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:43:48.948141    4112 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 14:43:48.948146    4112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 14:43:48.948149    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:43:49.020992    4112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:43:49.026311    4112 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:43:49.026362    4112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:43:49.030987    4112 api_server.go:72] duration metric: took 105.549709ms to wait for apiserver process to appear ...
	I0307 14:43:49.030995    4112 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:43:49.031002    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:49.037589    4112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:43:49.040881    4112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 14:43:54.033123    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:54.033168    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:59.033533    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:59.033582    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:04.033920    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:04.033950    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:09.034382    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:09.034422    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:14.035077    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:14.035176    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:19.035995    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:19.036026    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 14:44:19.367531    4112 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 14:44:19.370802    4112 out.go:177] * Enabled addons: storage-provisioner
	I0307 14:44:19.377832    4112 addons.go:505] duration metric: took 30.452317833s for enable addons: enabled=[storage-provisioner]
	I0307 14:44:24.037038    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:24.037077    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:29.038377    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:29.038419    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:34.038967    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:34.039022    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:39.041156    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:39.041221    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:44.043535    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:44.043579    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:49.045847    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:49.045979    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:49.057557    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:44:49.057639    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:49.067921    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:44:49.067991    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:49.078280    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:44:49.078357    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:49.088895    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:44:49.088970    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:49.113636    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:44:49.113718    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:49.125784    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:44:49.125853    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:49.135863    4112 logs.go:276] 0 containers: []
	W0307 14:44:49.135876    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:49.135942    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:49.146961    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:44:49.146976    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:49.146982    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:49.151914    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:44:49.151920    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:44:49.163319    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:44:49.163329    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:44:49.178238    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:49.178251    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:44:49.212318    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:44:49.212410    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:44:49.213540    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:49.213548    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:49.250242    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:44:49.250254    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:44:49.265050    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:44:49.265062    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:44:49.279289    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:44:49.279304    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:44:49.290808    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:44:49.290819    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:44:49.305431    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:44:49.305441    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:44:49.323534    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:44:49.323545    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:44:49.335032    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:49.335042    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:49.358462    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:44:49.358471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:49.370258    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:44:49.370269    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:44:49.370298    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:44:49.370304    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:44:49.370308    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:44:49.370312    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:44:49.370315    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:44:59.374506    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:04.376817    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:04.377056    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:04.396052    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:04.396149    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:04.410122    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:04.410187    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:04.422157    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:04.422228    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:04.432683    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:04.432758    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:04.442738    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:04.442810    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:04.453219    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:04.453285    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:04.466322    4112 logs.go:276] 0 containers: []
	W0307 14:45:04.466333    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:04.466392    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:04.476571    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:04.476588    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:04.476593    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:04.490615    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:04.490629    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:04.524500    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:04.524592    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:04.525747    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:04.525754    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:04.530602    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:04.530608    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:04.566782    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:04.566793    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:04.587630    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:04.587640    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:04.600228    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:04.600239    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:04.613983    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:04.613992    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:04.639527    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:04.639536    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:04.657497    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:04.657508    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:04.668911    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:04.668921    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:04.683603    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:04.683614    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:04.695203    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:04.695214    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:04.712642    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:04.712651    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:04.712678    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:45:04.712682    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:04.712685    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:04.712689    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:04.712695    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:45:14.715524    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:19.718183    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:19.718421    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:19.740687    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:19.740791    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:19.755554    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:19.755638    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:19.769188    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:19.769256    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:19.780504    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:19.780574    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:19.790972    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:19.791038    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:19.801727    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:19.801799    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:19.811801    4112 logs.go:276] 0 containers: []
	W0307 14:45:19.811817    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:19.811877    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:19.823024    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:19.823040    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:19.823044    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:19.857790    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:19.857882    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:19.858967    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:19.858972    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:19.894525    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:19.894537    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:19.908872    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:19.908883    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:19.920260    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:19.920271    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:19.943192    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:19.943201    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:19.947412    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:19.947417    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:19.964770    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:19.964783    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:19.976904    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:19.976914    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:19.991253    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:19.991262    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:20.002990    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:20.002999    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:20.020493    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:20.020504    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:20.031486    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:20.031498    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:20.042765    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:20.042790    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:20.042818    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:45:20.042824    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:20.042827    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:20.042832    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:20.042835    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:45:30.045026    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:35.045349    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:35.045499    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:35.063047    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:35.063131    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:35.075896    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:35.075969    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:35.087327    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:35.087396    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:35.099519    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:35.099580    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:35.114117    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:35.114190    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:35.125358    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:35.125425    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:35.135724    4112 logs.go:276] 0 containers: []
	W0307 14:45:35.135735    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:35.135800    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:35.149373    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:35.149387    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:35.149392    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:35.172685    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:35.172695    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:35.183875    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:35.183886    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:35.217521    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:35.217617    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:35.218770    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:35.218777    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:35.254012    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:35.254025    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:35.265485    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:35.265498    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:35.279850    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:35.279863    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:35.291586    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:35.291600    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:35.309526    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:35.309538    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:35.314109    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:35.314116    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:35.328712    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:35.328723    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:35.342820    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:35.342829    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:35.354362    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:35.354375    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:35.365954    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:35.365963    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:35.365987    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:45:35.365991    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:35.365995    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:35.365999    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:35.366001    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:45:45.368044    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:50.370441    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:50.370980    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:50.409388    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:50.409532    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:50.431594    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:50.431715    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:50.447276    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:50.447365    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:50.460455    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:50.460533    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:50.471677    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:50.471751    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:50.482305    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:50.482368    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:50.492703    4112 logs.go:276] 0 containers: []
	W0307 14:45:50.492715    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:50.492774    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:50.503671    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:50.503685    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:50.503691    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:50.518687    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:50.518697    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:50.536726    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:50.536736    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:50.548915    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:50.548928    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:50.573748    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:50.573761    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:50.589479    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:50.589495    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:50.601529    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:50.601542    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:50.606071    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:50.606077    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:50.642488    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:50.642499    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:50.661219    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:50.661231    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:50.674992    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:50.675006    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:50.686322    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:50.686336    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:50.697797    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:50.697811    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:50.730292    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:50.730389    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:50.731544    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:50.731554    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:50.731581    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:45:50.731588    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:50.731592    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:50.731598    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:50.731601    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:00.735819    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:05.738085    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:05.738273    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:05.758734    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:05.758853    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:05.774120    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:05.774197    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:05.786408    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:05.786486    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:05.797311    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:05.797374    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:05.807153    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:05.807221    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:05.817999    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:05.818067    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:05.828462    4112 logs.go:276] 0 containers: []
	W0307 14:46:05.828475    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:05.828537    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:05.838938    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:05.838957    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:05.838962    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:05.879132    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:05.879143    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:05.898091    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:05.898106    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:05.911526    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:05.911538    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:05.929243    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:05.929255    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:05.952712    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:05.952723    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:05.985804    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:05.985897    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:05.987049    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:05.987057    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:05.992077    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:05.992085    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:06.006898    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:06.006908    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:06.019142    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:06.019153    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:06.033540    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:06.033551    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:06.045578    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:06.045590    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:06.057525    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:06.057535    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:06.072326    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:06.072339    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:06.083709    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:06.083719    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:06.094913    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:06.094923    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:06.094951    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:46:06.094957    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:06.094960    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:06.094964    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:06.094967    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:16.099147    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:21.101520    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:21.101710    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:21.134589    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:21.134694    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:21.151868    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:21.151945    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:21.164907    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:21.164988    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:21.176076    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:21.176143    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:21.186730    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:21.186808    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:21.197765    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:21.197837    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:21.208434    4112 logs.go:276] 0 containers: []
	W0307 14:46:21.208444    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:21.208495    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:21.218635    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:21.218657    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:21.218662    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:21.230939    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:21.230952    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:21.236181    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:21.236193    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:21.250797    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:21.250807    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:21.262346    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:21.262360    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:21.274219    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:21.274231    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:21.291753    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:21.291764    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:21.303418    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:21.303429    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:21.329425    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:21.329434    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:21.365104    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:21.365120    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:21.377305    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:21.377318    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:21.388923    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:21.388935    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:21.403716    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:21.403728    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:21.416375    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:21.416386    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:21.449778    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:21.449871    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:21.450959    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:21.450966    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:21.466308    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:21.466319    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:21.466348    4112 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 14:46:21.466354    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:21.466358    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	  Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:21.466361    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:21.466366    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:31.468690    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:36.470985    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:36.471129    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:36.482543    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:36.482615    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:36.493094    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:36.493179    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:36.503276    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:36.503347    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:36.513732    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:36.513804    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:36.523881    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:36.523947    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:36.534889    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:36.534961    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:36.545257    4112 logs.go:276] 0 containers: []
	W0307 14:46:36.545269    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:36.545330    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:36.556322    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:36.556339    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:36.556345    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:36.573621    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:36.573632    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:36.585084    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:36.585095    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:36.619627    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:36.619642    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:36.634247    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:36.634259    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:36.646030    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:36.646041    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:36.657366    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:36.657378    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:36.672578    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:36.672589    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:36.684464    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:36.684477    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:36.707657    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:36.707664    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:36.721096    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:36.721109    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:36.738617    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:36.738630    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:36.750446    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:36.750458    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:36.784012    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:36.784105    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:36.785186    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:36.785191    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:36.789796    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:36.789805    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:36.801165    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:36.801178    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:36.801202    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:36.801207    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:36.801210    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:36.801214    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:36.801217    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:46.805035    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:51.807306    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:51.807414    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:51.819320    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:51.819400    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:51.830855    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:51.830937    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:51.843505    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:51.843588    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:51.855522    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:51.855601    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:51.867380    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:51.867461    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:51.880389    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:51.880472    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:51.892672    4112 logs.go:276] 0 containers: []
	W0307 14:46:51.892685    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:51.892754    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:51.906551    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:51.906569    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:51.906575    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:51.923608    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:51.923620    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:51.941820    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:51.941831    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:51.957772    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:51.957784    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:51.997544    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:51.997556    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:52.010030    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:52.010041    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:52.048757    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:52.048768    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:52.079551    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:52.079564    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:52.115211    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:52.115306    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:52.116459    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:52.116471    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:52.128292    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:52.128305    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:52.140707    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:52.140719    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:52.152783    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:52.152795    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:52.157173    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:52.157183    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:52.174646    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:52.174657    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:52.187550    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:52.187562    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:52.200957    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:52.200968    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:52.200995    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:52.201000    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:52.201005    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:52.201010    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:52.201015    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:02.204760    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:07.206984    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:07.207094    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:07.218618    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:07.218692    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:07.229584    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:07.229651    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:07.240368    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:07.240450    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:07.251801    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:07.251869    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:07.263555    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:07.263628    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:07.273860    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:07.273933    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:07.284215    4112 logs.go:276] 0 containers: []
	W0307 14:47:07.284228    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:07.284287    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:07.297527    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:07.297549    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:07.297554    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:07.311953    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:07.311964    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:07.323117    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:07.323129    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:07.334602    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:07.334616    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:07.350357    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:07.350368    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:07.363603    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:07.363615    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:07.368233    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:07.368240    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:07.402107    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:07.402118    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:07.417070    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:07.417082    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:07.442393    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:07.442401    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:07.477676    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:07.477781    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:07.478939    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:07.478944    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:07.493986    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:07.493997    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:07.511716    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:07.511730    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:07.526461    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:07.526471    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:07.542654    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:07.542664    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:07.554876    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:07.554886    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:07.554914    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:07.554918    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:07.554921    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:07.554925    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:07.554927    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:17.558532    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:22.560857    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:22.560986    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:22.574702    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:22.574796    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:22.586534    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:22.586598    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:22.597391    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:22.597459    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:22.607979    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:22.608049    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:22.618904    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:22.618967    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:22.629923    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:22.629993    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:22.640191    4112 logs.go:276] 0 containers: []
	W0307 14:47:22.640206    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:22.640275    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:22.653097    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:22.653113    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:22.653118    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:22.668204    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:22.668214    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:22.679612    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:22.679625    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:22.684736    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:22.684747    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:22.721979    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:22.721990    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:22.735287    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:22.735298    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:22.747660    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:22.747673    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:22.759848    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:22.759861    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:22.771340    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:22.771351    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:22.789213    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:22.789223    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:22.813865    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:22.813874    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:22.848414    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:22.848508    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:22.849626    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:22.849633    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:22.865430    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:22.865444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:22.880093    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:22.880105    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:22.893038    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:22.893049    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:22.908461    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:22.908470    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:22.908495    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:22.908499    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:22.908518    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:22.908526    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:22.908530    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:32.912689    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:37.914938    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:37.915190    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:37.943144    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:37.943271    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:37.960849    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:37.960934    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:37.974492    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:37.974564    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:37.985661    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:37.985739    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:37.995609    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:37.995682    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:38.009663    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:38.009730    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:38.026665    4112 logs.go:276] 0 containers: []
	W0307 14:47:38.026677    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:38.026737    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:38.037081    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:38.037098    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:38.037104    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:38.051788    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:38.051808    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:38.063961    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:38.063971    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:38.087463    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:38.087471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:38.120759    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:38.120852    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:38.122009    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:38.122018    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:38.137512    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:38.137523    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:38.149393    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:38.149403    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:38.153997    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:38.154005    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:38.168744    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:38.168755    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:38.180906    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:38.180919    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:38.218099    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:38.218111    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:38.230889    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:38.230900    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:38.249713    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:38.249724    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:38.261809    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:38.261820    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:38.275700    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:38.275712    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:38.288004    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:38.288016    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:38.288045    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:38.288049    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:38.288083    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:38.288105    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:38.288145    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:48.291076    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:53.293373    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:53.297973    4112 out.go:177] 
	W0307 14:47:53.301937    4112 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 14:47:53.301947    4112 out.go:239] * 
	W0307 14:47:53.302748    4112 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:47:53.316953    4112 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-974000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-07 14:47:53.390207 -0800 PST m=+3685.884499460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-974000 -n running-upgrade-974000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-974000 -n running-upgrade-974000: exit status 2 (15.678220583s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-974000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-843000          | force-systemd-flag-843000 | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-041000              | force-systemd-env-041000  | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-041000           | force-systemd-env-041000  | jenkins | v1.32.0 | 07 Mar 24 14:37 PST | 07 Mar 24 14:37 PST |
	| start   | -p docker-flags-655000                | docker-flags-655000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-843000             | force-systemd-flag-843000 | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-843000          | force-systemd-flag-843000 | jenkins | v1.32.0 | 07 Mar 24 14:37 PST | 07 Mar 24 14:37 PST |
	| start   | -p cert-expiration-072000             | cert-expiration-072000    | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-655000 ssh               | docker-flags-655000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-655000 ssh               | docker-flags-655000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-655000                | docker-flags-655000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST | 07 Mar 24 14:37 PST |
	| start   | -p cert-options-995000                | cert-options-995000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-995000 ssh               | cert-options-995000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-995000 -- sudo        | cert-options-995000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-995000                | cert-options-995000       | jenkins | v1.32.0 | 07 Mar 24 14:37 PST | 07 Mar 24 14:37 PST |
	| start   | -p running-upgrade-974000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 14:38 PST | 07 Mar 24 14:39 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-974000             | running-upgrade-974000    | jenkins | v1.32.0 | 07 Mar 24 14:39 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-072000             | cert-expiration-072000    | jenkins | v1.32.0 | 07 Mar 24 14:40 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-072000             | cert-expiration-072000    | jenkins | v1.32.0 | 07 Mar 24 14:40 PST | 07 Mar 24 14:40 PST |
	| start   | -p kubernetes-upgrade-697000          | kubernetes-upgrade-697000 | jenkins | v1.32.0 | 07 Mar 24 14:40 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-697000          | kubernetes-upgrade-697000 | jenkins | v1.32.0 | 07 Mar 24 14:41 PST | 07 Mar 24 14:41 PST |
	| start   | -p kubernetes-upgrade-697000          | kubernetes-upgrade-697000 | jenkins | v1.32.0 | 07 Mar 24 14:41 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-697000          | kubernetes-upgrade-697000 | jenkins | v1.32.0 | 07 Mar 24 14:41 PST | 07 Mar 24 14:41 PST |
	| start   | -p stopped-upgrade-491000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 14:41 PST | 07 Mar 24 14:42 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-491000 stop           | minikube                  | jenkins | v1.26.0 | 07 Mar 24 14:42 PST | 07 Mar 24 14:42 PST |
	| start   | -p stopped-upgrade-491000             | stopped-upgrade-491000    | jenkins | v1.32.0 | 07 Mar 24 14:42 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 14:42:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
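	Each entry below follows that klog convention: a severity letter (I/W/E/F), the date as mmdd, a timestamp, the emitting pid, and file:line. To skim a capture like this for problems, filtering on the severity prefix is usually enough (a sketch; the file name is illustrative):
	
	  grep -E '^[WEF][0-9]{4} ' last-start.log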
	I0307 14:42:16.306343    4258 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:42:16.306533    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:42:16.306537    4258 out.go:304] Setting ErrFile to fd 2...
	I0307 14:42:16.306539    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:42:16.306686    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:42:16.307795    4258 out.go:298] Setting JSON to false
	I0307 14:42:16.326572    4258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4293,"bootTime":1709847043,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:42:16.326638    4258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:42:16.330823    4258 out.go:177] * [stopped-upgrade-491000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:42:16.343737    4258 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:42:16.338794    4258 notify.go:220] Checking for updates...
	I0307 14:42:16.349762    4258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:42:16.352802    4258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:42:16.355778    4258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:42:16.362772    4258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:42:16.369792    4258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:42:16.373051    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:42:16.376751    4258 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 14:42:16.379763    4258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:42:16.383730    4258 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:42:16.389717    4258 start.go:297] selected driver: qemu2
	I0307 14:42:16.389723    4258 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:16.389784    4258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:42:16.392618    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:42:16.392638    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:42:16.392669    4258 start.go:340] cluster config:
	{Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:16.392729    4258 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:42:16.400786    4258 out.go:177] * Starting "stopped-upgrade-491000" primary control-plane node in "stopped-upgrade-491000" cluster
	I0307 14:42:16.404679    4258 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:42:16.404702    4258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 14:42:16.404711    4258 cache.go:56] Caching tarball of preloaded images
	I0307 14:42:16.404771    4258 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:42:16.404778    4258 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 14:42:16.404851    4258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/config.json ...
	I0307 14:42:16.405430    4258 start.go:360] acquireMachinesLock for stopped-upgrade-491000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:42:16.405461    4258 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "stopped-upgrade-491000"
	I0307 14:42:16.405470    4258 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:42:16.405475    4258 fix.go:54] fixHost starting: 
	I0307 14:42:16.405609    4258 fix.go:112] recreateIfNeeded on stopped-upgrade-491000: state=Stopped err=<nil>
	W0307 14:42:16.405619    4258 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:42:16.409772    4258 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-491000" ...
	I0307 14:42:14.575928    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:14.575941    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:14.590050    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:14.590063    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:14.609574    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:14.609586    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:14.633279    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:14.633291    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:14.650506    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:14.650517    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:14.661889    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:14.661901    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:14.674188    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:14.674199    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:14.685954    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:14.685968    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:17.203419    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:16.417800    4258 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-491000 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/disk.qcow2
	I0307 14:42:16.467283    4258 main.go:141] libmachine: STDOUT: 
	I0307 14:42:16.467320    4258 main.go:141] libmachine: STDERR: 
	I0307 14:42:16.467326    4258 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
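	The qemu2 driver above uses user-mode networking, so the guest is only reachable through the host-side forwards declared in -nic user (tcp::50471 -> guest 22 for SSH, tcp::50472 -> guest 2376 for the Docker TLS port); that is why every later connection in this log goes to localhost rather than to the VM's own address. The forwarding syntax in isolation, with placeholder ports (a sketch, not a complete bootable invocation):
	
	  qemu-system-aarch64 -M virt -accel hvf -cpu host -m 2048 \
	    -nic user,model=virtio,hostfwd=tcp::2222-:22,hostfwd=tcp::2376-:2376 \
	    disk.qcow2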
	I0307 14:42:22.205210    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:22.205383    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:22.217130    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:22.217208    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:22.227892    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:22.227987    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:22.238918    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:22.238981    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:22.249380    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:22.249442    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:22.260317    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:22.260380    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:22.273695    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:22.273759    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:22.284067    4112 logs.go:276] 0 containers: []
	W0307 14:42:22.284079    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:22.284134    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:22.295203    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:22.295221    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:22.295228    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:22.330466    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:22.330478    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:22.342206    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:22.342216    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:22.346565    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:22.346572    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:22.360206    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:22.360218    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:22.377360    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:22.377369    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:22.389508    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:22.389520    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:22.413020    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:22.413032    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:22.450111    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:22.450119    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:22.464590    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:22.464600    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:22.476194    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:22.476207    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:22.500429    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:22.500440    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:22.512863    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:22.512873    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:22.533609    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:22.533620    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:22.556775    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:22.556786    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:22.568282    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:22.568293    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:22.583668    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:22.583680    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:25.103467    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:30.105977    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
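	The pattern in the 4112 stream is a poll loop: hit https://10.0.2.15:8443/healthz, and on each timeout re-enumerate the control-plane containers and re-gather their logs before trying again. A rough manual equivalent of the probe, run from inside the guest (curl flags are standard; -k skips certificate verification, which the real client handles via the cluster CA instead):
	
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz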
	I0307 14:42:30.106386    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:30.139620    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:30.139757    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:30.158714    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:30.158812    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:30.173452    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:30.173533    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:30.189489    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:30.189561    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:30.200456    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:30.200524    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:30.214793    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:30.214864    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:30.225007    4112 logs.go:276] 0 containers: []
	W0307 14:42:30.225018    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:30.225074    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:30.235707    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:30.235727    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:30.235733    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:30.240458    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:30.240464    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:30.280064    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:30.280077    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:30.292567    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:30.292578    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:30.304830    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:30.304842    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:30.319431    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:30.319442    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:30.331802    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:30.331816    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:30.349776    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:30.349787    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:30.364820    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:30.364831    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:30.376642    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:30.376654    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:30.388855    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:30.388867    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:30.413512    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:30.413523    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:30.451112    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:30.451120    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:30.470431    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:30.470444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:30.485190    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:30.485200    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:30.497112    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:30.497125    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:30.516704    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:30.516715    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:33.037930    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:35.918220    4258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/config.json ...
	I0307 14:42:35.918706    4258 machine.go:94] provisionDockerMachine start ...
	I0307 14:42:35.918803    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:35.919066    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:35.919075    4258 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 14:42:35.989267    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 14:42:35.989284    4258 buildroot.go:166] provisioning hostname "stopped-upgrade-491000"
	I0307 14:42:35.989360    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:35.989492    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:35.989500    4258 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-491000 && echo "stopped-upgrade-491000" | sudo tee /etc/hostname
	I0307 14:42:36.054504    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-491000
	
	I0307 14:42:36.054562    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.054685    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.054694    4258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-491000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-491000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-491000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 14:42:36.116493    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
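	The /etc/hosts script above is idempotent: it only edits the file when no line already names the host, and it rewrites an existing 127.0.1.1 entry rather than appending a duplicate. The same pattern condensed, with NAME as a placeholder (a sketch; \s in grep/sed is a GNU extension):
	
	  NAME=my-host
	  grep -q "\s$NAME$" /etc/hosts || {
	    grep -q '^127.0.1.1\s' /etc/hosts \
	      && sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts \
	      || echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  }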
	I0307 14:42:36.116509    4258 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18320-1153/.minikube CaCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18320-1153/.minikube}
	I0307 14:42:36.116524    4258 buildroot.go:174] setting up certificates
	I0307 14:42:36.116530    4258 provision.go:84] configureAuth start
	I0307 14:42:36.116534    4258 provision.go:143] copyHostCerts
	I0307 14:42:36.116623    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem, removing ...
	I0307 14:42:36.116634    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem
	I0307 14:42:36.116765    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem (1078 bytes)
	I0307 14:42:36.116982    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem, removing ...
	I0307 14:42:36.116987    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem
	I0307 14:42:36.117044    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem (1123 bytes)
	I0307 14:42:36.117183    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem, removing ...
	I0307 14:42:36.117188    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem
	I0307 14:42:36.117243    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem (1679 bytes)
	I0307 14:42:36.117375    4258 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-491000 san=[127.0.0.1 localhost minikube stopped-upgrade-491000]
	I0307 14:42:36.175574    4258 provision.go:177] copyRemoteCerts
	I0307 14:42:36.175613    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 14:42:36.175621    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.203567    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 14:42:36.210657    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 14:42:36.217426    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 14:42:36.224210    4258 provision.go:87] duration metric: took 107.668792ms to configureAuth
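	configureAuth above regenerates the Docker server certificate with the SANs listed in the provision line (127.0.0.1, localhost, minikube, the profile name) and ships it to /etc/docker on the guest. To confirm which SANs a generated server.pem actually carries, a standard openssl query works (path illustrative):
	
	  openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'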
	I0307 14:42:36.224219    4258 buildroot.go:189] setting minikube options for container-runtime
	I0307 14:42:36.224325    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:42:36.224365    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.224455    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.224461    4258 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 14:42:36.279911    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 14:42:36.279921    4258 buildroot.go:70] root file system type: tmpfs
	I0307 14:42:36.279971    4258 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 14:42:36.280013    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.280115    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.280147    4258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 14:42:36.336683    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 14:42:36.336734    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.336842    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.336852    4258 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 14:42:36.668252    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 14:42:36.668271    4258 machine.go:97] duration metric: took 749.554083ms to provisionDockerMachine
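	The unit file written above leans on the override trick its own comments describe: an empty ExecStart= first clears any start command inherited from a base unit, and the following ExecStart= installs the replacement; without the blank directive systemd rejects the unit ("more than one ExecStart= setting"). The same pattern as a minimal drop-in, which is the more common way to apply it (paths and dockerd flags illustrative):
	
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	    | sudo tee /etc/systemd/system/docker.service.d/override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker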
	I0307 14:42:36.668278    4258 start.go:293] postStartSetup for "stopped-upgrade-491000" (driver="qemu2")
	I0307 14:42:36.668285    4258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 14:42:36.668353    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 14:42:36.668363    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.699681    4258 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 14:42:36.701177    4258 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 14:42:36.701184    4258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/addons for local assets ...
	I0307 14:42:36.701255    4258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/files for local assets ...
	I0307 14:42:36.701376    4258 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0307 14:42:36.701503    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 14:42:36.704231    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:42:36.711499    4258 start.go:296] duration metric: took 43.214875ms for postStartSetup
	I0307 14:42:36.711511    4258 fix.go:56] duration metric: took 20.305998208s for fixHost
	I0307 14:42:36.711543    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.711645    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.711650    4258 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 14:42:36.767733    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709851357.210679629
	
	I0307 14:42:36.767740    4258 fix.go:216] guest clock: 1709851357.210679629
	I0307 14:42:36.767744    4258 fix.go:229] Guest: 2024-03-07 14:42:37.210679629 -0800 PST Remote: 2024-03-07 14:42:36.711513 -0800 PST m=+20.438249001 (delta=499.166629ms)
	I0307 14:42:36.767755    4258 fix.go:200] guest clock delta is within tolerance: 499.166629ms
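	The delta is plain wall-clock subtraction: guest 14:42:37.210679629 minus host 14:42:36.711513 is 0.499166629 s, the reported 499.166629ms, close enough that minikube leaves the guest clock alone.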
	I0307 14:42:36.767757    4258 start.go:83] releasing machines lock for "stopped-upgrade-491000", held for 20.362252833s
	I0307 14:42:36.767823    4258 ssh_runner.go:195] Run: cat /version.json
	I0307 14:42:36.767828    4258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 14:42:36.767832    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.767844    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	W0307 14:42:36.768388    4258 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50471: connect: connection refused
	I0307 14:42:36.768413    4258 retry.go:31] will retry after 315.077034ms: dial tcp [::1]:50471: connect: connection refused
	W0307 14:42:36.796587    4258 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 14:42:36.796639    4258 ssh_runner.go:195] Run: systemctl --version
	I0307 14:42:36.798651    4258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 14:42:36.800494    4258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 14:42:36.800522    4258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 14:42:36.803601    4258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 14:42:36.808227    4258 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 14:42:36.808234    4258 start.go:494] detecting cgroup driver to use...
	I0307 14:42:36.808308    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:42:36.815562    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 14:42:36.818978    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 14:42:36.821955    4258 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 14:42:36.821996    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 14:42:36.825157    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:42:36.828110    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 14:42:36.830962    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:42:36.833545    4258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 14:42:36.836521    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 14:42:36.840002    4258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 14:42:36.842659    4258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 14:42:36.845088    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:36.911974    4258 ssh_runner.go:195] Run: sudo systemctl restart containerd
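	The sed edits above pin containerd to the cgroupfs driver by forcing SystemdCgroup = false in /etc/containerd/config.toml and normalizing every runtime reference to io.containerd.runc.v2. To check what a given guest ended up with (the exact TOML table path varies by containerd version, but the key name does not):
	
	  grep -n 'SystemdCgroup' /etc/containerd/config.toml
	  # typically found under:
	  # [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  #   SystemdCgroup = false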
	I0307 14:42:36.918381    4258 start.go:494] detecting cgroup driver to use...
	I0307 14:42:36.918452    4258 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 14:42:36.925337    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:42:36.931254    4258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 14:42:36.938149    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:42:36.942969    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 14:42:36.947604    4258 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 14:42:37.009368    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 14:42:37.014748    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:42:37.020647    4258 ssh_runner.go:195] Run: which cri-dockerd
	I0307 14:42:37.022151    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 14:42:37.024933    4258 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 14:42:37.029919    4258 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 14:42:37.095728    4258 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 14:42:37.162960    4258 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 14:42:37.163031    4258 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 14:42:37.168954    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.236970    4258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:42:37.352342    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 14:42:37.357112    4258 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 14:42:37.362191    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:42:37.366609    4258 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 14:42:37.432079    4258 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 14:42:37.494141    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.563037    4258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 14:42:37.568838    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:42:37.573664    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.638784    4258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 14:42:37.677752    4258 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 14:42:37.677829    4258 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 14:42:37.680594    4258 start.go:562] Will wait 60s for crictl version
	I0307 14:42:37.680662    4258 ssh_runner.go:195] Run: which crictl
	I0307 14:42:37.681994    4258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 14:42:37.696823    4258 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 14:42:37.696900    4258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:42:37.713823    4258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:42:38.038501    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:38.038586    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:38.050934    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:38.051009    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:38.063486    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:38.063560    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:38.079547    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:38.079620    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:38.095902    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:38.095973    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:38.107536    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:38.107610    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:38.119003    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:38.119082    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:38.130573    4112 logs.go:276] 0 containers: []
	W0307 14:42:38.130584    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:38.130648    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:38.141987    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:38.142009    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:38.142016    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:38.182792    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:38.182805    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:38.195450    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:38.195461    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:38.209073    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:38.209087    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:38.224480    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:38.224492    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:38.248499    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:38.248514    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:38.253786    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:38.253798    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:38.273920    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:38.273933    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:38.287235    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:38.287246    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:38.304050    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:38.304062    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:38.318845    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:38.318857    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:38.342813    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:38.342826    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:38.381699    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:38.381714    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:38.401498    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:38.401513    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:38.415433    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:38.415444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:38.431515    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:38.431531    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:38.453004    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:38.453017    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:37.735412    4258 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 14:42:37.735545    4258 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 14:42:37.737026    4258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 14:42:37.740649    4258 kubeadm.go:877] updating cluster {Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 14:42:37.740699    4258 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:42:37.740742    4258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:42:37.751515    4258 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:42:37.751524    4258 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
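	The mismatch is registry naming: the v1.24-era preload ships images tagged k8s.gcr.io/..., while this minikube build expects registry.k8s.io/..., so the cache check fails and LoadCachedImages below has to fetch and transfer each image individually. Done by hand, a retag of the existing image would satisfy the same check (a sketch; one image shown):
	
	  docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1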
	I0307 14:42:37.751576    4258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:42:37.755211    4258 ssh_runner.go:195] Run: which lz4
	I0307 14:42:37.756428    4258 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0307 14:42:37.757773    4258 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 14:42:37.757783    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 14:42:38.544464    4258 docker.go:649] duration metric: took 788.058833ms to copy over tarball
	I0307 14:42:38.544526    4258 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 14:42:39.718690    4258 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.174143833s)
	I0307 14:42:39.718705    4258 ssh_runner.go:146] rm: /preloaded.tar.lz4
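	The preload fast path above is: stat the target to see whether the tarball is already on the guest, copy the ~360 MB archive over if not (the log does this through minikube's ssh_runner rather than plain scp), unpack it into /var with lz4 and extended attributes preserved, then delete the archive. The unpack-and-clean step, as run on the guest:
	
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4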
	I0307 14:42:39.734468    4258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:42:39.737494    4258 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 14:42:39.742441    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:39.807028    4258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:42:40.968620    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:41.499685    4258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.692637167s)
	I0307 14:42:41.499768    4258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:42:41.513790    4258 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:42:41.513800    4258 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 14:42:41.513805    4258 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 14:42:41.520127    4258 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:41.520194    4258 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:41.520240    4258 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:41.520293    4258 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:41.520323    4258 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:41.520371    4258 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:41.520873    4258 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 14:42:41.521188    4258 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:41.529974    4258 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:41.530066    4258 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 14:42:41.530119    4258 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:41.530137    4258 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:41.530361    4258 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:41.530369    4258 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:41.530281    4258 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:41.530282    4258 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0307 14:42:43.474838    4258 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 14:42:43.475548    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.514944    4258 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 14:42:43.514993    4258 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.515097    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.536069    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 14:42:43.536253    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:42:43.538904    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 14:42:43.538929    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 14:42:43.575901    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.579974    4258 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:42:43.579985    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 14:42:43.593423    4258 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 14:42:43.593446    4258 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.593505    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.615596    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 14:42:43.621699    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.633048    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 14:42:43.633093    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 14:42:43.633112    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.633180    4258 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 14:42:43.633193    4258 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 14:42:43.633210    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 14:42:43.643236    4258 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 14:42:43.643261    4258 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.643320    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.658528    4258 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 14:42:43.658559    4258 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.658564    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 14:42:43.658618    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.658670    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 14:42:43.662374    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.669274    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.670559    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 14:42:43.684364    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 14:42:43.684382    4258 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 14:42:43.684399    4258 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.684395    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 14:42:43.684442    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.684540    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 14:42:43.689318    4258 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 14:42:43.689336    4258 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.689387    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.696320    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 14:42:43.703593    4258 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 14:42:43.703607    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 14:42:43.703948    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 14:42:43.730986    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
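Each cached image above follows the same cycle: inspect the on-host hash, `docker rmi` the mismatched copy, scp the arm64 tarball from the cache, and pipe it into `docker load`. A rough Go sketch of the load step, with the image name and tarball path as placeholders:

    // Sketch of the per-image load cycle: remove the wrong-hash image, then
    // stream the cached tarball into `docker load` as in docker.go:304.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func loadCached(image, tar string) error {
        // docker rmi <image>: drop the mismatched variant first.
        if err := exec.Command("docker", "rmi", image).Run(); err != nil {
            fmt.Println("rmi (image may simply be absent):", err)
        }
        // /bin/bash -c "sudo cat <tar> | docker load"
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", tar))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := loadCached("registry.k8s.io/pause:3.7",
            "/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Println("load failed:", err)
        }
    }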
	W0307 14:42:44.187716    4258 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 14:42:44.188286    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.227903    4258 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 14:42:44.227940    4258 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.228049    4258 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.256187    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 14:42:44.256364    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:42:44.258500    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 14:42:44.258522    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 14:42:44.291532    4258 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:42:44.291552    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 14:42:44.530708    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 14:42:44.530749    4258 cache_images.go:92] duration metric: took 3.016928208s to LoadCachedImages
	W0307 14:42:44.530787    4258 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0307 14:42:44.530792    4258 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 14:42:44.530838    4258 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-491000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 14:42:44.530906    4258 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 14:42:44.549604    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:42:44.549615    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:42:44.549621    4258 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 14:42:44.549630    4258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-491000 NodeName:stopped-upgrade-491000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 14:42:44.549702    4258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-491000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 14:42:44.549756    4258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 14:42:44.552661    4258 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 14:42:44.552688    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 14:42:44.555725    4258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 14:42:44.560808    4258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 14:42:44.565715    4258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 14:42:44.570610    4258 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 14:42:44.571850    4258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
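The one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, then append the current mapping. A sketch of the same filter-and-append logic in Go; writing to a local hosts.new instead of sudo-copying over /etc/hosts is a deliberate safety assumption:

    // Filter out any stale control-plane entry and append the current one.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "10.0.2.15\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        keep := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+host) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, entry)
        // The log's version writes to /tmp/h.$$ and sudo-cps it into place.
        if err := os.WriteFile("hosts.new",
            []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            fmt.Println("write:", err)
        }
    }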
	I0307 14:42:44.575809    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:44.643497    4258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:42:44.650164    4258 certs.go:68] Setting up /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000 for IP: 10.0.2.15
	I0307 14:42:44.650172    4258 certs.go:194] generating shared ca certs ...
	I0307 14:42:44.650188    4258 certs.go:226] acquiring lock for ca certs: {Name:mkd070f5c39ed755a49409426e8106f4ed85635d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.650343    4258 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key
	I0307 14:42:44.650394    4258 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key
	I0307 14:42:44.650404    4258 certs.go:256] generating profile certs ...
	I0307 14:42:44.650485    4258 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key
	I0307 14:42:44.650502    4258 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8
	I0307 14:42:44.650512    4258 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 14:42:44.740661    4258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 ...
	I0307 14:42:44.740676    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8: {Name:mkb1ccdb7c8c63a52370c7d14717a816578c4d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.740988    4258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8 ...
	I0307 14:42:44.740993    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8: {Name:mk4abdd846f6791dd793d25a1f785075eadbe389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.741131    4258 certs.go:381] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt
	I0307 14:42:44.741925    4258 certs.go:385] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key
	I0307 14:42:44.742118    4258 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.key
	I0307 14:42:44.742251    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem (1338 bytes)
	W0307 14:42:44.742280    4258 certs.go:480] ignoring /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0307 14:42:44.742286    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 14:42:44.742304    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem (1078 bytes)
	I0307 14:42:44.742319    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem (1123 bytes)
	I0307 14:42:44.742335    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem (1679 bytes)
	I0307 14:42:44.742373    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:42:44.742678    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 14:42:44.749570    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 14:42:44.756528    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 14:42:44.763906    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 14:42:44.771350    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 14:42:44.778502    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 14:42:44.785344    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 14:42:44.792157    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 14:42:44.799717    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 14:42:44.806672    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0307 14:42:44.813364    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0307 14:42:44.820026    4258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 14:42:44.825255    4258 ssh_runner.go:195] Run: openssl version
	I0307 14:42:44.827110    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 14:42:44.830018    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.831373    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.831396    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.833280    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 14:42:44.836475    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0307 14:42:44.839807    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.841233    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 21:56 /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.841255    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.843006    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
	I0307 14:42:44.845824    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0307 14:42:44.848839    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.850337    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 21:56 /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.850357    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.852137    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 14:42:44.855691    4258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 14:42:44.857126    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 14:42:44.859901    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 14:42:44.861764    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 14:42:44.863730    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 14:42:44.865736    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 14:42:44.867486    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
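The six openssl invocations above all use -checkend 86400, i.e. "does this certificate expire within the next 24 hours?". An equivalent check with Go's crypto/x509; the certificate path here is a placeholder:

    // Go equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 86400s, would regenerate")
        } else {
            fmt.Println("certificate valid past the -checkend window")
        }
    }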
	I0307 14:42:44.869315    4258 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:44.869381    4258 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:42:44.879475    4258 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 14:42:44.882929    4258 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 14:42:44.882935    4258 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 14:42:44.882938    4258 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 14:42:44.882966    4258 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 14:42:44.885887    4258 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:42:44.886193    4258 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-491000" does not appear in /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:42:44.886291    4258 kubeconfig.go:62] /Users/jenkins/minikube-integration/18320-1153/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-491000" cluster setting kubeconfig missing "stopped-upgrade-491000" context setting]
	I0307 14:42:44.886542    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.886976    4258 kapi.go:59] client config for stopped-upgrade-491000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c7b6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:42:44.887294    4258 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 14:42:44.890495    4258 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-491000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
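The drift check above relies on diff's exit status: 0 means the rendered kubeadm.yaml matches the one on disk, non-zero means reconfigure from the new file. A minimal Go sketch of that decision (treating any non-zero exit as drift, although diff reserves status 2 for errors):

    // Run `diff -u old new` and branch on the exit status, as kubeadm.go:634
    // does with the output shown above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err != nil {
            fmt.Println("config drift detected, would reconfigure:")
            fmt.Print(string(out))
            return
        }
        fmt.Println("kubeadm.yaml unchanged")
    }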
	I0307 14:42:44.890501    4258 kubeadm.go:1153] stopping kube-system containers ...
	I0307 14:42:44.890552    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:42:44.901324    4258 docker.go:483] Stopping containers: [e4e19e63a4f0 dd38b5bb91df f3df7e225915 2654012828a3 b8f8162b3de6 9b1325ed02f8 be0f33ea35aa 6e9d4e2d34c9]
	I0307 14:42:44.901388    4258 ssh_runner.go:195] Run: docker stop e4e19e63a4f0 dd38b5bb91df f3df7e225915 2654012828a3 b8f8162b3de6 9b1325ed02f8 be0f33ea35aa 6e9d4e2d34c9
	I0307 14:42:44.911973    4258 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 14:42:44.917995    4258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:42:44.920618    4258 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:42:44.920623    4258 kubeadm.go:156] found existing configuration files:
	
	I0307 14:42:44.920650    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0307 14:42:44.923243    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:42:44.923266    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:42:44.925730    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0307 14:42:44.928395    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:42:44.928427    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:42:44.931679    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0307 14:42:44.934565    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:42:44.934591    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:42:44.937196    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0307 14:42:44.940188    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:42:44.940215    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 14:42:44.943349    4258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:42:44.946174    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:44.971521    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.377027    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.494526    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.516427    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
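On the restart path, minikube replays individual `kubeadm init phase` steps rather than a full init. A sketch of that sequence; the binary and config paths come from the log, while the loop itself (and the omission of the `env PATH=...` wrapper seen above) is illustrative:

    // Replay the init phases in the order the log runs them, stopping on the
    // first failure.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"/var/lib/minikube/binaries/v1.24.1/kubeadm",
                "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("all init phases completed")
    }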
	I0307 14:42:45.540272    4258 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:42:45.540363    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:46.042392    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:45.970906    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:45.971101    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:45.993969    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:45.994073    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:46.008715    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:46.008797    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:46.026439    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:46.026542    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:46.037221    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:46.037295    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:46.048321    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:46.048389    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:46.060117    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:46.060180    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:46.071555    4112 logs.go:276] 0 containers: []
	W0307 14:42:46.071568    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:46.071635    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:46.084178    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:46.084196    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:46.084202    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:46.124273    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:46.124294    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:46.131211    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:46.131226    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:46.153688    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:46.153714    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:46.173039    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:46.173056    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:46.197773    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:46.197787    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:46.211980    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:46.211991    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:46.233711    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:46.233721    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:46.246718    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:46.246731    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:46.286938    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:46.286958    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:46.300682    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:46.300695    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:46.314093    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:46.314107    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:46.327508    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:46.327524    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:46.340854    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:46.340870    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:46.357372    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:46.357387    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:46.370572    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:46.370588    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:46.386620    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:46.386634    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
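The gathering pass above is mechanical: list container IDs per component with `docker ps -a --filter`, then fetch the last 400 log lines of each. A condensed Go sketch for a single component:

    // List kube-apiserver container IDs, then dump the tail of each one's
    // logs, mirroring the logs.go:123 loop above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
        if err != nil {
            fmt.Println("ps:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("--- %s ---\n%s", id, logs)
        }
    }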
	I0307 14:42:48.901951    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:46.542415    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:46.546892    4258 api_server.go:72] duration metric: took 1.006617708s to wait for apiserver process to appear ...
	I0307 14:42:46.546902    4258 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:42:46.546911    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:53.904208    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
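Both processes (4112 and 4258) are polling https://10.0.2.15:8443/healthz and logging "stopped: ... context deadline exceeded" on each timeout. A minimal poller with the same shape; skipping TLS verification here is an assumption for brevity (minikube validates against the cluster CA):

    // Poll the apiserver healthz endpoint until it answers 200 or an overall
    // deadline passes, retrying after each per-request timeout.
    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            req, _ := http.NewRequestWithContext(ctx, "GET",
                "https://10.0.2.15:8443/healthz", nil)
            resp, err := client.Do(req)
            cancel()
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            fmt.Println("stopped:", err) // mirrors api_server.go:269
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for healthz")
    }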
	I0307 14:42:53.904426    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:42:53.925769    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:42:53.925892    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:42:53.941669    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:42:53.941758    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:42:53.953834    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:42:53.953909    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:42:53.964170    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:42:53.964238    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:42:53.974528    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:42:53.974593    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:42:53.985124    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:42:53.985191    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:42:53.995975    4112 logs.go:276] 0 containers: []
	W0307 14:42:53.995986    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:42:53.996049    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:42:54.006150    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:42:54.006170    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:42:54.006176    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:42:54.025859    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:42:54.025873    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:42:54.038001    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:42:54.038011    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:42:54.052900    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:42:54.052913    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:42:54.069787    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:42:54.069797    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:42:54.081524    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:42:54.081535    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:42:54.086310    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:42:54.086317    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:42:54.100335    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:42:54.100344    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:42:54.111449    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:42:54.111462    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:42:54.146476    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:42:54.146484    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:42:54.163054    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:42:54.163066    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:42:54.180058    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:42:54.180070    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:42:54.199225    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:42:54.199235    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:42:54.223617    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:42:54.223626    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:42:54.257696    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:42:54.257708    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:42:54.269920    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:42:54.269932    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:42:54.281824    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:42:54.281836    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:42:51.549021    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:51.549068    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:56.802338    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:56.549393    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:56.549456    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:01.803684    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:01.803834    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:01.819454    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:01.819526    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:01.830094    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:01.830165    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:01.840484    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:01.840555    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:01.851338    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:01.851401    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:01.861972    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:01.862038    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:01.872617    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:01.872688    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:01.882444    4112 logs.go:276] 0 containers: []
	W0307 14:43:01.882461    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:01.882527    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:01.892961    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:01.892978    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:01.892984    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:01.906970    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:01.906983    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:01.922228    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:01.922239    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:01.943720    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:01.943733    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:01.955054    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:01.955066    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:01.959472    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:01.959482    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:01.978622    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:01.978633    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:01.990075    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:01.990091    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:02.001599    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:02.001613    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:02.039369    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:02.039385    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:02.062560    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:02.062576    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:02.075139    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:02.075152    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:02.087983    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:02.087994    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:02.126373    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:02.126393    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:02.140485    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:02.140497    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:02.152795    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:02.152809    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:02.188622    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:02.188634    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
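
Note: the block above is one complete log-gathering pass. minikube first resolves the container ID(s) for each control-plane component with a docker name filter, then tails the last 400 lines of each hit; the same pass repeats below after every failed health check. A minimal local sketch of that pattern in Go (function names are illustrative; minikube itself runs these commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors `docker logs --tail 400 <id>`.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("listing", c, "failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			logs, _ := tailLogs(id)
    			fmt.Println(logs)
    		}
    	}
    }
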
	I0307 14:43:01.550039    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:01.550105    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:04.716427    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:06.550907    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:06.550930    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:09.718704    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:09.719148    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:09.735672    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:09.735755    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:09.746429    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:09.746491    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:09.766722    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:09.766797    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:09.779059    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:09.779138    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:09.789120    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:09.789194    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:09.800038    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:09.800103    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:09.809850    4112 logs.go:276] 0 containers: []
	W0307 14:43:09.809863    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:09.809923    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:09.820660    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:09.820675    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:09.820681    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:09.832198    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:09.832209    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:09.845930    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:09.845943    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:09.864586    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:09.864596    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:09.882703    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:09.882721    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:09.894977    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:09.894989    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:09.899366    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:09.899373    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:09.913551    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:09.913561    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:09.936061    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:09.936069    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:09.950695    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:09.950706    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:09.963125    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:09.963136    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:09.976013    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:09.976024    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:09.987102    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:09.987114    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:10.026696    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:10.026708    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:10.045978    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:10.045990    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:10.057636    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:10.057647    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:10.094779    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:10.094788    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:12.612185    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:11.551752    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:11.551844    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:17.614742    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:17.615167    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:17.655937    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:17.656068    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:17.677257    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:17.677385    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:17.692454    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:17.692542    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:17.705354    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:17.705429    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:17.716018    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:17.716088    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:17.726356    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:17.726428    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:17.736395    4112 logs.go:276] 0 containers: []
	W0307 14:43:17.736406    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:17.736464    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:17.747069    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:17.747088    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:17.747094    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:17.782162    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:17.782172    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:17.821454    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:17.821468    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:17.833069    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:17.833079    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:17.844432    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:17.844443    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:17.858822    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:17.858835    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:17.863344    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:17.863359    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:17.880991    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:17.881003    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:17.892687    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:17.892699    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:17.909563    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:17.909578    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:17.921327    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:17.921337    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:17.935097    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:17.935108    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:17.957132    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:17.957142    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:17.971319    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:17.971330    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:17.991656    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:17.991668    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:18.009468    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:18.009477    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:18.021348    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:18.021358    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:16.553277    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:16.553343    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:20.536488    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:21.554800    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:21.554851    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:25.539169    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
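
Note: two minikube processes (pids 4112 and 4258) are interleaved here, each polling the same healthz endpoint on a roughly five-second cadence. Most attempts end with "context deadline exceeded" (the client-side timeout expired while awaiting response headers); the attempt directly above instead failed at TCP connect ("dial tcp ... i/o timeout"), meaning the apiserver port was not accepting connections at all. A hedged Go sketch of such a poll (certificate verification is skipped here purely for brevity; minikube actually validates against its cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // produces "Client.Timeout exceeded" on a hung apiserver
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip CA setup
    		},
    	}
    	const url = "https://10.0.2.15:8443/healthz"
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err)
    		} else {
    			fmt.Println("healthz:", resp.Status)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(5 * time.Second)
    	}
    }
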
	I0307 14:43:25.539562    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:25.572692    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:25.572830    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:25.592605    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:25.592706    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:25.606856    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:25.606931    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:25.621931    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:25.622006    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:25.632819    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:25.632891    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:25.643866    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:25.643930    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:25.653996    4112 logs.go:276] 0 containers: []
	W0307 14:43:25.654010    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:25.654070    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:25.664353    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:25.664370    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:25.664376    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:25.703320    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:25.703333    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:25.721086    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:25.721096    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:25.732341    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:25.732351    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:25.744113    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:25.744125    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:25.755848    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:25.755859    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:25.767202    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:25.767212    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:25.778750    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:25.778761    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:25.793222    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:25.793233    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:25.805251    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:25.805265    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:25.821113    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:25.821126    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:25.838482    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:25.838492    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:25.875961    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:25.875970    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:25.880676    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:25.880684    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:25.894634    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:25.894645    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:25.914340    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:25.914353    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:25.928890    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:25.928902    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:28.456114    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:26.555694    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:26.555820    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:33.457192    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:33.457346    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:33.469538    4112 logs.go:276] 2 containers: [31d62bf3ca57 d06a3786ab57]
	I0307 14:43:33.469618    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:33.481236    4112 logs.go:276] 2 containers: [805bd6838f41 61231a9d2be0]
	I0307 14:43:33.481314    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:33.492832    4112 logs.go:276] 1 containers: [b39c80dbb3f1]
	I0307 14:43:33.492899    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:33.503542    4112 logs.go:276] 2 containers: [02948cf3104b 231c71fe1af3]
	I0307 14:43:33.503609    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:33.514008    4112 logs.go:276] 1 containers: [e6596f53a35c]
	I0307 14:43:33.514071    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:33.524917    4112 logs.go:276] 2 containers: [1c4dd65b4f99 901e9188be70]
	I0307 14:43:33.524977    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:33.534994    4112 logs.go:276] 0 containers: []
	W0307 14:43:33.535007    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:33.535071    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:33.545295    4112 logs.go:276] 2 containers: [24b3c7771004 60fa0d7d0ecf]
	I0307 14:43:33.545314    4112 logs.go:123] Gathering logs for coredns [b39c80dbb3f1] ...
	I0307 14:43:33.545321    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39c80dbb3f1"
	I0307 14:43:33.557326    4112 logs.go:123] Gathering logs for kube-scheduler [02948cf3104b] ...
	I0307 14:43:33.557342    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02948cf3104b"
	I0307 14:43:33.569558    4112 logs.go:123] Gathering logs for kube-controller-manager [1c4dd65b4f99] ...
	I0307 14:43:33.569569    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c4dd65b4f99"
	I0307 14:43:33.587251    4112 logs.go:123] Gathering logs for kube-controller-manager [901e9188be70] ...
	I0307 14:43:33.587261    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901e9188be70"
	I0307 14:43:33.598426    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:33.598437    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:33.621409    4112 logs.go:123] Gathering logs for etcd [61231a9d2be0] ...
	I0307 14:43:33.621417    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61231a9d2be0"
	I0307 14:43:33.638713    4112 logs.go:123] Gathering logs for kube-proxy [e6596f53a35c] ...
	I0307 14:43:33.638733    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6596f53a35c"
	I0307 14:43:33.654475    4112 logs.go:123] Gathering logs for storage-provisioner [60fa0d7d0ecf] ...
	I0307 14:43:33.654486    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60fa0d7d0ecf"
	I0307 14:43:33.665886    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:33.665902    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:33.702028    4112 logs.go:123] Gathering logs for kube-apiserver [31d62bf3ca57] ...
	I0307 14:43:33.702045    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31d62bf3ca57"
	I0307 14:43:33.718802    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:33.718812    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:33.723162    4112 logs.go:123] Gathering logs for kube-apiserver [d06a3786ab57] ...
	I0307 14:43:33.723169    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06a3786ab57"
	I0307 14:43:33.741792    4112 logs.go:123] Gathering logs for etcd [805bd6838f41] ...
	I0307 14:43:33.741803    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805bd6838f41"
	I0307 14:43:33.755772    4112 logs.go:123] Gathering logs for kube-scheduler [231c71fe1af3] ...
	I0307 14:43:33.755784    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 231c71fe1af3"
	I0307 14:43:33.770490    4112 logs.go:123] Gathering logs for storage-provisioner [24b3c7771004] ...
	I0307 14:43:33.770505    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b3c7771004"
	I0307 14:43:33.782064    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:43:33.782078    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:33.794458    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:33.794471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:31.557834    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:31.557884    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:36.332353    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:36.560193    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:36.560219    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:41.333837    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:41.333932    4112 kubeadm.go:591] duration metric: took 4m3.987681875s to restartPrimaryControlPlane
	W0307 14:43:41.334034    4112 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 14:43:41.334080    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 14:43:42.349383    4112 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.015286042s)
	I0307 14:43:42.349453    4112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 14:43:42.354610    4112 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:43:42.357568    4112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:43:42.360367    4112 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:43:42.360372    4112 kubeadm.go:156] found existing configuration files:
	
	I0307 14:43:42.360397    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf
	I0307 14:43:42.362684    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:43:42.362710    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:43:42.365724    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf
	I0307 14:43:42.368841    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:43:42.368865    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:43:42.371579    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf
	I0307 14:43:42.374238    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:43:42.374271    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:43:42.377781    4112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf
	I0307 14:43:42.380844    4112 kubeadm.go:162] "https://control-plane.minikube.internal:50298" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50298 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:43:42.380875    4112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
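
Note: having given up on restarting the control plane, minikube resets it with kubeadm reset and then runs the stale-config check above: for each of the four kubeconfig files under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the grep fails (here every grep fails simply because kubeadm reset already deleted the files). The logic reduces to this sketch (endpoint value taken from the log; the structure is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:50298"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Println(path, "may be stale - removing")
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
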
	I0307 14:43:42.383577    4112 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 14:43:42.401271    4112 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 14:43:42.401330    4112 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 14:43:42.448806    4112 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 14:43:42.448861    4112 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 14:43:42.448910    4112 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 14:43:42.501390    4112 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 14:43:42.506290    4112 out.go:204]   - Generating certificates and keys ...
	I0307 14:43:42.506328    4112 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 14:43:42.506364    4112 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 14:43:42.506408    4112 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 14:43:42.506443    4112 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 14:43:42.506483    4112 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 14:43:42.506511    4112 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 14:43:42.506548    4112 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 14:43:42.506588    4112 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 14:43:42.506631    4112 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 14:43:42.506668    4112 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 14:43:42.506686    4112 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 14:43:42.506712    4112 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 14:43:42.580402    4112 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 14:43:42.647187    4112 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 14:43:42.714404    4112 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 14:43:42.798059    4112 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 14:43:42.825036    4112 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 14:43:42.825528    4112 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 14:43:42.825550    4112 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 14:43:42.922424    4112 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 14:43:42.926547    4112 out.go:204]   - Booting up control plane ...
	I0307 14:43:42.926596    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 14:43:42.926648    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 14:43:42.926680    4112 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 14:43:42.926716    4112 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 14:43:42.926862    4112 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 14:43:41.560991    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:41.561045    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:47.426352    4112 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503999 seconds
	I0307 14:43:47.426443    4112 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 14:43:47.431704    4112 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 14:43:47.956976    4112 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 14:43:47.957220    4112 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-974000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 14:43:48.459958    4112 kubeadm.go:309] [bootstrap-token] Using token: r77cp9.ivp7f2h88tolpeh7
	I0307 14:43:48.466661    4112 out.go:204]   - Configuring RBAC rules ...
	I0307 14:43:48.466720    4112 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 14:43:48.466765    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 14:43:48.468724    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 14:43:48.470265    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 14:43:48.470925    4112 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 14:43:48.471935    4112 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 14:43:48.475319    4112 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 14:43:48.636379    4112 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 14:43:48.864243    4112 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 14:43:48.864686    4112 kubeadm.go:309] 
	I0307 14:43:48.864714    4112 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 14:43:48.864717    4112 kubeadm.go:309] 
	I0307 14:43:48.864751    4112 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 14:43:48.864756    4112 kubeadm.go:309] 
	I0307 14:43:48.864767    4112 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 14:43:48.864835    4112 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 14:43:48.864863    4112 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 14:43:48.864883    4112 kubeadm.go:309] 
	I0307 14:43:48.864913    4112 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 14:43:48.864919    4112 kubeadm.go:309] 
	I0307 14:43:48.864971    4112 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 14:43:48.864991    4112 kubeadm.go:309] 
	I0307 14:43:48.865089    4112 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 14:43:48.865126    4112 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 14:43:48.865185    4112 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 14:43:48.865190    4112 kubeadm.go:309] 
	I0307 14:43:48.865239    4112 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 14:43:48.865281    4112 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 14:43:48.865284    4112 kubeadm.go:309] 
	I0307 14:43:48.865318    4112 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r77cp9.ivp7f2h88tolpeh7 \
	I0307 14:43:48.865380    4112 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b \
	I0307 14:43:48.865404    4112 kubeadm.go:309] 	--control-plane 
	I0307 14:43:48.865413    4112 kubeadm.go:309] 
	I0307 14:43:48.865456    4112 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 14:43:48.865459    4112 kubeadm.go:309] 
	I0307 14:43:48.865500    4112 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r77cp9.ivp7f2h88tolpeh7 \
	I0307 14:43:48.865555    4112 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b 
	I0307 14:43:48.865610    4112 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
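
Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the control plane's identity. It can be recomputed from the CA certificate; a sketch, assuming the standard kubeadm path for ca.crt:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed standard location
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
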
	I0307 14:43:48.865614    4112 cni.go:84] Creating CNI manager for ""
	I0307 14:43:48.865622    4112 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:43:48.868347    4112 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 14:43:48.874375    4112 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 14:43:48.877686    4112 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
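
Note: the 457-byte /etc/cni/net.d/1-k8s.conflist that minikube writes here is not reproduced in the log. The sketch below only illustrates the general shape of a bridge conflist that the bridge CNI plugin consumes; every field value is an assumption, not the file minikube actually installs:

    package main

    import "os"

    // Generic bridge CNI conflist for illustration only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
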
	I0307 14:43:48.882843    4112 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 14:43:48.882899    4112 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 14:43:48.882902    4112 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-974000 minikube.k8s.io/updated_at=2024_03_07T14_43_48_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=running-upgrade-974000 minikube.k8s.io/primary=true
	I0307 14:43:48.924513    4112 kubeadm.go:1106] duration metric: took 41.650291ms to wait for elevateKubeSystemPrivileges
	I0307 14:43:48.924532    4112 ops.go:34] apiserver oom_adj: -16
	W0307 14:43:48.924691    4112 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 14:43:48.924698    4112 kubeadm.go:393] duration metric: took 4m11.592726167s to StartCluster
	I0307 14:43:48.924707    4112 settings.go:142] acquiring lock: {Name:mke1b21879aa30c53aba22e595b1eacff0880bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:43:48.924826    4112 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:43:48.925246    4112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:43:48.925425    4112 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:43:48.929305    4112 out.go:177] * Verifying Kubernetes components...
	I0307 14:43:48.925517    4112 config.go:182] Loaded profile config "running-upgrade-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:43:48.925482    4112 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 14:43:48.937169    4112 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-974000"
	I0307 14:43:48.937184    4112 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-974000"
	W0307 14:43:48.937188    4112 addons.go:243] addon storage-provisioner should already be in state true
	I0307 14:43:48.937206    4112 host.go:66] Checking if "running-upgrade-974000" exists ...
	I0307 14:43:48.937220    4112 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-974000"
	I0307 14:43:48.937237    4112 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-974000"
	I0307 14:43:48.937254    4112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:43:48.938385    4112 kapi.go:59] client config for running-upgrade-974000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/running-upgrade-974000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021df6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:43:48.938914    4112 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-974000"
	W0307 14:43:48.938919    4112 addons.go:243] addon default-storageclass should already be in state true
	I0307 14:43:48.938927    4112 host.go:66] Checking if "running-upgrade-974000" exists ...
	I0307 14:43:48.943256    4112 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:43:48.947340    4112 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:43:48.947346    4112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 14:43:48.947353    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:43:48.948141    4112 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 14:43:48.948146    4112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 14:43:48.948149    4112 sshutil.go:53] new ssh client: &{IP:localhost Port:50266 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/running-upgrade-974000/id_rsa Username:docker}
	I0307 14:43:49.020992    4112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:43:49.026311    4112 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:43:49.026362    4112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:43:49.030987    4112 api_server.go:72] duration metric: took 105.549709ms to wait for apiserver process to appear ...
	I0307 14:43:49.030995    4112 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:43:49.031002    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:49.037589    4112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:43:49.040881    4112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
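
Note: enabling an addon is the two-step sequence visible above: the manifest bytes are copied into the VM ("scp memory --> /etc/kubernetes/addons/...yaml"), then applied with the in-VM kubectl against the local kubeconfig. Run locally rather than over SSH, the apply step amounts to this sketch (paths copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// sudo accepts leading VAR=value arguments, matching the logged command line.
    	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
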
	I0307 14:43:46.563347    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:46.563514    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:46.582155    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:43:46.582244    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:46.597047    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:43:46.597139    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:46.609497    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:43:46.609569    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:46.619760    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:43:46.619838    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:46.630100    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:43:46.630169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:46.640447    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:43:46.640541    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:46.650887    4258 logs.go:276] 0 containers: []
	W0307 14:43:46.650898    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:46.650958    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:46.661696    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:43:46.661713    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:43:46.661718    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:43:46.679160    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:46.679170    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:46.704340    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:43:46.704349    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:46.715997    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:46.716006    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:46.720371    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:43:46.720381    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:43:46.734472    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:43:46.734483    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:43:46.745793    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:43:46.745804    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:43:46.761124    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:43:46.761134    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:43:46.775029    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:46.775040    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:46.884593    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:43:46.884605    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:43:46.929101    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:43:46.929115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:43:46.942952    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:43:46.942964    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:43:46.954842    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:46.954855    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:46.995098    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:43:46.995114    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:43:47.011628    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:43:47.011640    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:43:47.027902    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:43:47.027915    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:43:47.041821    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:43:47.041834    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:43:49.556080    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:54.033123    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:54.033168    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:54.557211    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:54.557407    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:54.582216    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:43:54.582329    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:54.598787    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:43:54.598857    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:54.612209    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:43:54.612277    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:54.623516    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:43:54.623591    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:54.634153    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:43:54.634222    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:54.647854    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:43:54.647931    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:54.657973    4258 logs.go:276] 0 containers: []
	W0307 14:43:54.657986    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:54.658045    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:54.668472    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:43:54.668490    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:54.668502    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:54.706970    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:43:54.706979    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:43:54.721701    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:43:54.721716    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:43:54.733343    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:43:54.733357    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:54.745363    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:43:54.745373    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:43:54.763510    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:43:54.763521    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:43:54.778465    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:43:54.778480    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:43:54.797305    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:43:54.797317    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:43:54.808811    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:43:54.808822    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:43:54.824456    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:43:54.824473    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:43:54.836845    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:43:54.836856    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:43:54.849586    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:43:54.849598    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:43:54.862106    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:54.862117    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:54.866744    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:43:54.866750    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:43:54.905040    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:54.905054    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:54.929044    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:54.929054    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:54.966954    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:43:54.966967    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
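Each gathering pass above follows the same shape: resolve every k8s_<component> container ID with a docker ps name filter, then tail that container's logs, alongside journalctl for kubelet and Docker, dmesg, and a describe-nodes run against the bundled kubectl binary. A condensed shell equivalent of the per-component part (a sketch, not minikube's actual logs.go code):

    # Resolve each component's container IDs, then tail its logs,
    # mirroring the docker commands in the cycle above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done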
	I0307 14:43:59.033533    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:59.033582    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:57.487378    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:04.033920    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:04.033950    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:02.489310    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:02.489421    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:02.504506    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:02.504591    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:02.517308    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:02.517383    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:02.529183    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:02.529259    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:02.540724    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:02.540800    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:02.552721    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:02.552795    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:02.570547    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:02.570612    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:02.582045    4258 logs.go:276] 0 containers: []
	W0307 14:44:02.582061    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:02.582140    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:02.594430    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:02.594452    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:02.594458    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:02.607180    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:02.607199    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:02.647738    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:02.647763    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:02.661818    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:02.661828    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:02.674035    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:02.674047    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:02.688442    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:02.688463    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:02.704002    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:02.704020    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:02.708361    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:02.708370    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:02.750336    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:02.750349    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:02.764816    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:02.764828    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:02.777293    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:02.777304    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:02.802988    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:02.803006    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:02.842910    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:02.842924    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:02.857542    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:02.857557    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:02.869429    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:02.869443    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:02.885590    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:02.885604    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:02.905153    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:02.905167    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
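Several components in these cycles report two container IDs (for example kube-apiserver e910452bb885 and 2654012828a3, etcd d05052d3f851 and dd38b5bb91df). Because docker ps is run with -a, both the current container and an exited, restarted predecessor are listed, and the gatherer tails logs from both. A quick way to tell them apart (hypothetical check, assuming docker access inside the guest):

    # Show status alongside the ID; the exited entry is the older instance.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'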
	I0307 14:44:05.420066    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:09.034382    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:09.034422    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:10.422487    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:10.422694    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:10.435352    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:10.435432    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:10.445953    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:10.446021    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:10.455984    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:10.456054    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:10.466401    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:10.466472    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:10.477098    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:10.477169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:10.488083    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:10.488155    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:10.498108    4258 logs.go:276] 0 containers: []
	W0307 14:44:10.498119    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:10.498193    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:10.511434    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:10.511452    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:10.511458    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:10.548267    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:10.548280    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:10.552217    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:10.552225    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:10.591777    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:10.591793    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:10.604352    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:10.604367    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:10.622821    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:10.622834    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:10.635376    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:10.635386    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:10.651438    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:10.651448    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:10.674939    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:10.674946    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:10.688553    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:10.688563    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:10.701463    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:10.701473    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:10.713378    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:10.713389    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:10.748215    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:10.748226    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:10.787319    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:10.787331    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:10.808110    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:10.808121    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:10.822335    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:10.822346    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:10.833549    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:10.833561    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:14.035077    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:14.035176    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:13.352564    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:19.035995    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:19.036026    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 14:44:19.367531    4112 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 14:44:19.370802    4112 out.go:177] * Enabled addons: storage-provisioner
	I0307 14:44:19.377832    4112 addons.go:505] duration metric: took 30.452317833s for enable addons: enabled=[storage-provisioner]
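The 4112 run's healthz probes keep failing with a client timeout roughly every five seconds, so the default-storageclass callback cannot list StorageClasses at 10.0.2.15:8443 and addon enablement completes with storage-provisioner only. A minimal reproduction of the probe's failure mode (a sketch; standard curl flags, endpoint taken from the log):

    # Probe the apiserver with a 5 s timeout, matching the cadence of the
    # repeated "context deadline exceeded" lines above.
    for i in 1 2 3; do
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
        && { echo "healthy"; break; } \
        || echo "probe ${i}: timed out or unreachable"
    done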
	I0307 14:44:18.354911    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:18.355169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:18.378935    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:18.379032    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:18.392458    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:18.392533    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:18.403857    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:18.403939    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:18.413968    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:18.414042    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:18.424532    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:18.424605    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:18.442315    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:18.442384    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:18.452635    4258 logs.go:276] 0 containers: []
	W0307 14:44:18.452646    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:18.452706    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:18.463384    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:18.463402    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:18.463418    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:18.479295    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:18.479307    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:18.493019    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:18.493029    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:18.507221    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:18.507234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:18.518566    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:18.518576    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:18.529626    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:18.529637    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:18.540911    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:18.540922    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:18.564787    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:18.564798    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:18.608549    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:18.608559    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:18.619742    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:18.619754    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:18.634890    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:18.634904    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:18.652382    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:18.652394    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:18.693181    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:18.693193    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:18.707584    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:18.707595    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:18.719942    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:18.719954    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:18.759677    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:18.759701    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:18.764408    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:18.764419    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:21.278555    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:24.037038    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:24.037077    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:26.280849    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:26.281000    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:26.295942    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:26.296031    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:29.038377    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:29.038419    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:26.308145    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:26.310008    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:26.320529    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:26.320598    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:26.331049    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:26.331124    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:26.341800    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:26.341865    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:26.352348    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:26.352411    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:26.362732    4258 logs.go:276] 0 containers: []
	W0307 14:44:26.362740    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:26.362789    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:26.372993    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:26.373008    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:26.373014    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:26.409828    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:26.409838    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:26.413988    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:26.413995    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:26.428461    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:26.428471    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:26.445555    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:26.445568    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:26.456788    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:26.456799    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:26.481611    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:26.481621    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:26.519514    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:26.519526    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:26.533996    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:26.534005    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:26.545420    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:26.545436    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:26.561405    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:26.561415    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:26.572381    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:26.572394    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:26.584994    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:26.585008    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:26.620792    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:26.620802    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:26.635476    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:26.635489    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:26.648083    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:26.648093    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:26.659588    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:26.659599    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:29.172809    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:34.038967    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:34.039022    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:34.174992    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:34.175121    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:34.188574    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:34.188651    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:34.199287    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:34.199359    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:34.213995    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:34.214061    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:34.224736    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:34.224824    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:34.235110    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:34.235173    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:34.246808    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:34.246874    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:34.257399    4258 logs.go:276] 0 containers: []
	W0307 14:44:34.257410    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:34.257462    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:34.268121    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:34.268138    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:34.268143    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:34.283093    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:34.283104    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:34.294942    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:34.294952    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:34.312400    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:34.312411    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:34.325171    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:34.325180    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:34.336408    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:34.336418    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:34.371155    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:34.371167    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:34.383161    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:34.383171    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:34.394757    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:34.394769    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:34.399113    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:34.399119    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:34.437002    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:34.437015    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:34.451109    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:34.451123    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:34.491094    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:34.491108    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:34.505379    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:34.505390    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:34.516788    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:34.516800    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:34.530956    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:34.530965    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:34.542462    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:34.542472    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:39.041156    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:39.041221    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:37.067675    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:44.043535    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:44.043579    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:42.070038    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:42.070378    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:42.100282    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:42.100419    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:42.119254    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:42.119352    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:42.133266    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:42.133336    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:42.145059    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:42.145143    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:42.155859    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:42.155932    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:42.166889    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:42.166961    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:42.177346    4258 logs.go:276] 0 containers: []
	W0307 14:44:42.177359    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:42.177418    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:42.187926    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:42.187942    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:42.187947    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:42.225476    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:42.225489    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:42.240297    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:42.240308    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:42.258211    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:42.258223    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:42.276224    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:42.276234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:42.287723    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:42.287733    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:42.299025    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:42.299036    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:42.311433    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:42.311442    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:42.330439    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:42.330448    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:42.354845    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:42.354853    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:42.369774    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:42.369789    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:42.374235    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:42.374242    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:42.409220    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:42.409234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:42.446745    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:42.446755    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:42.461173    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:42.461184    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:42.472950    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:42.472963    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:42.487599    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:42.487610    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:45.000963    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:49.045847    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:49.045979    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:49.057557    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:44:49.057639    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:49.067921    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:44:49.067991    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:49.078280    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:44:49.078357    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:49.088895    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:44:49.088970    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:49.113636    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:44:49.113718    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:49.125784    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:44:49.125853    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:49.135863    4112 logs.go:276] 0 containers: []
	W0307 14:44:49.135876    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:49.135942    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:49.146961    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:44:49.146976    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:49.146982    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:49.151914    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:44:49.151920    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:44:49.163319    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:44:49.163329    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:44:49.178238    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:49.178251    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:44:49.212318    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:44:49.212410    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:44:49.213540    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:49.213548    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:49.250242    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:44:49.250254    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:44:49.265050    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:44:49.265062    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:44:49.279289    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:44:49.279304    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:44:49.290808    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:44:49.290819    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:44:49.305431    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:44:49.305441    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:44:49.323534    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:44:49.323545    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:44:49.335032    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:49.335042    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:49.358462    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:44:49.358471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:49.370258    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:44:49.370269    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:44:49.370298    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:44:49.370304    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:44:49.370308    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:44:49.370312    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:44:49.370315    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
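The kubelet problem flagged twice above is a node-authorizer denial: the kubelet on running-upgrade-974000 tries to list and watch the kube-system/coredns ConfigMap, but the apiserver finds no relationship between that node and the object (typically because no coredns pod is bound to the node yet), so the request is forbidden. Assuming kubectl access via the minikube kubeconfig, checks along these lines would confirm it (hypothetical invocation, standard kubectl commands):

    # Is the coredns ConfigMap present, and is a coredns pod scheduled on
    # running-upgrade-974000? Without a pod on the node, the node authorizer
    # denies the kubelet's ConfigMap list/watch.
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system get configmap coredns
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system get pods -l k8s-app=kube-dns -o wide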
	I0307 14:44:50.003271    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:50.003498    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:50.026906    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:50.027009    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:50.042788    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:50.042875    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:50.055696    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:50.055775    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:50.066976    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:50.067042    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:50.077922    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:50.077990    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:50.088271    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:50.088344    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:50.098636    4258 logs.go:276] 0 containers: []
	W0307 14:44:50.098646    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:50.098701    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:50.113729    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:50.113746    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:50.113750    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:50.126259    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:50.126274    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:50.164253    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:50.164267    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:50.178607    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:50.178618    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:50.193542    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:50.193553    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:50.205686    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:50.205696    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:50.220126    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:50.220138    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:50.240568    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:50.240578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:50.258788    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:50.258801    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:50.297899    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:50.297909    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:50.310763    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:50.310774    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:50.322622    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:50.322634    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:50.327304    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:50.327310    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:50.352850    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:50.352860    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:50.390380    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:50.390391    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:50.406107    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:50.406124    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:50.431442    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:50.431451    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:52.944251    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:59.374506    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:57.946981    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:57.947379    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:57.994297    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:57.994417    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:58.010352    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:58.010436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:58.023355    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:58.023437    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:58.035296    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:58.035369    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:58.045746    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:58.045816    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:58.056099    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:58.056167    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:58.066490    4258 logs.go:276] 0 containers: []
	W0307 14:44:58.066502    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:58.066559    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:58.077222    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:58.077240    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:58.077246    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:58.081474    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:58.081485    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:58.120121    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:58.120134    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:58.138083    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:58.138095    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:58.174286    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:58.174299    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:58.187790    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:58.187800    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:58.201943    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:58.201957    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:58.219606    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:58.219619    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:58.231390    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:58.231401    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:58.256179    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:58.256188    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:58.267835    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:58.267845    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:58.285493    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:58.285504    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:58.297519    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:58.297529    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:58.334067    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:58.334075    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:58.349743    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:58.349760    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:58.360884    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:58.360896    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:58.372655    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:58.372666    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:00.889851    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:04.376817    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:04.377056    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:04.396052    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:04.396149    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:04.410122    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:04.410187    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:04.422157    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:04.422228    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:04.432683    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:04.432758    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:04.442738    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:04.442810    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:04.453219    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:04.453285    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:04.466322    4112 logs.go:276] 0 containers: []
	W0307 14:45:04.466333    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:04.466392    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:04.476571    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:04.476588    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:04.476593    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:04.490615    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:04.490629    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:04.524500    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:04.524592    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:04.525747    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:04.525754    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:04.530602    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:04.530608    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:05.892187    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:05.892323    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:05.908621    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:05.908703    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:05.921514    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:05.921615    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:05.932702    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:05.932770    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:05.947903    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:05.947975    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:05.959116    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:05.959194    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:05.969590    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:05.969663    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:05.979970    4258 logs.go:276] 0 containers: []
	W0307 14:45:05.979980    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:05.980036    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:05.995992    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:05.996009    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:05.996014    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:06.007970    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:06.007981    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:06.031173    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:06.031181    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:06.043094    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:06.043104    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:06.078201    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:06.078215    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:06.092222    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:06.092231    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:06.104244    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:06.104256    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:06.122466    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:06.122477    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:06.160980    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:06.160995    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:06.165157    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:06.165164    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:06.180043    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:06.180055    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:06.194551    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:06.194564    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:06.205920    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:06.205932    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:06.218724    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:06.218735    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:06.255419    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:06.255430    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:06.266889    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:06.266899    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:06.281659    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:06.281669    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:04.566782    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:04.566793    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:04.587630    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:04.587640    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:04.600228    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:04.600239    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:04.613983    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:04.613992    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:04.639527    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:04.639536    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:04.657497    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:04.657508    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:04.668911    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:04.668921    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:04.683603    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:04.683614    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:04.695203    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:04.695214    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:04.712642    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:04.712651    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:04.712678    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:45:04.712682    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:04.712685    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:04.712689    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:04.712695    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
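The two kubelet problems flagged in the summary above recur on every journal scan in this excerpt: the kube-apiserver's Node authorizer refuses to let the kubelet list the "coredns" ConfigMap because it finds "no relationship found between node 'running-upgrade-974000' and this object". A plausible reading is that the node-to-object graph the authorizer maintains is empty or stale because the apiserver itself never becomes healthy during this window. As an illustrative check only (not something the test harness runs), the same authorization decision can be reproduced with kubectl's impersonation flags, assuming the apiserver is reachable and using the admin kubeconfig path that appears elsewhere in this log:

    # Illustrative only: ask the apiserver whether the kubelet's identity
    # may list ConfigMaps in kube-system, impersonating the node user/group.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        auth can-i list configmaps -n kube-system \
        --as=system:node:running-upgrade-974000 --as-group=system:nodes
    # Should print "no" while the Node authorizer sees no pod on this node
    # referencing that ConfigMap; in this run it would likely just time out,
    # since the healthz probes in this log never succeed.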
	I0307 14:45:08.795117    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:13.797612    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:13.797841    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:13.813863    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:13.813954    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:13.826437    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:13.826515    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:13.837789    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:13.837861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:13.848474    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:13.848555    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:13.861554    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:13.861620    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:13.872385    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:13.872461    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:13.882960    4258 logs.go:276] 0 containers: []
	W0307 14:45:13.882970    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:13.883022    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:13.893466    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:13.893485    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:13.893490    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:13.915818    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:13.915828    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:13.927677    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:13.927691    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:13.939229    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:13.939240    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:13.957935    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:13.957947    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:13.970421    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:13.970431    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:14.008716    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:14.008726    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:14.022915    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:14.022926    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:14.034419    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:14.034431    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:14.072512    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:14.072524    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:14.084661    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:14.084674    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:14.103203    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:14.103214    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:14.117677    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:14.117686    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:14.128862    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:14.128876    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:14.145766    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:14.145776    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:14.149949    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:14.149955    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:14.184346    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:14.184359    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:14.715524    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:16.700900    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:19.718183    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:19.718421    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:19.740687    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:19.740791    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:19.755554    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:19.755638    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:19.769188    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:19.769256    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:19.780504    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:19.780574    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:19.790972    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:19.791038    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:19.801727    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:19.801799    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:19.811801    4112 logs.go:276] 0 containers: []
	W0307 14:45:19.811817    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:19.811877    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:19.823024    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:19.823040    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:19.823044    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:19.857790    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:19.857882    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:19.858967    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:19.858972    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:19.894525    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:19.894537    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:19.908872    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:19.908883    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:19.920260    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:19.920271    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:19.943192    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:19.943201    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:19.947412    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:19.947417    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:19.964770    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:19.964783    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:19.976904    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:19.976914    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:19.991253    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:19.991262    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:20.002990    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:20.002999    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:20.020493    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:20.020504    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:20.031486    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:20.031498    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:20.042765    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:20.042790    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:20.042818    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:45:20.042824    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:20.042827    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:20.042832    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:20.042835    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
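Each gathering pass above follows the same two-step pattern: discover per-component container IDs with docker ps filtered on the kubeadm-style k8s_<component> names, then tail each hit with docker logs --tail 400. A minimal hand-runnable sketch of that loop, under the assumptions that it is run inside the guest, the runtime is Docker, and the component names match the filters above:

    # Sketch of the discover-then-tail pattern, for manual debugging.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done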
	I0307 14:45:21.703158    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:21.703363    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:21.729520    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:21.729623    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:21.746982    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:21.747056    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:21.760063    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:21.760141    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:21.771751    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:21.771825    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:21.782184    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:21.782253    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:21.793045    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:21.793112    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:21.805884    4258 logs.go:276] 0 containers: []
	W0307 14:45:21.805892    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:21.805949    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:21.816118    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:21.816136    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:21.816141    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:21.852646    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:21.852657    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:21.866143    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:21.866156    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:21.880702    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:21.880713    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:21.891938    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:21.891951    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:21.909414    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:21.909423    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:21.921035    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:21.921048    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:21.925104    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:21.925111    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:21.939232    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:21.939243    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:21.954575    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:21.954587    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:21.966452    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:21.966463    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:21.989816    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:21.989825    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:22.024275    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:22.024287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:22.038933    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:22.038942    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:22.050721    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:22.050732    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:22.089372    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:22.089381    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:22.100397    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:22.100407    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:24.615176    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:29.617623    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:29.617943    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:29.653062    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:29.653211    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:29.671349    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:29.671443    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:29.686605    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:29.686674    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:29.697624    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:29.697687    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:29.708447    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:29.708521    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:29.720232    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:29.720309    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:29.731067    4258 logs.go:276] 0 containers: []
	W0307 14:45:29.731079    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:29.731135    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:29.741955    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:29.741972    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:29.741977    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:29.777750    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:29.777761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:29.794979    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:29.794989    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:29.806553    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:29.806563    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:29.818228    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:29.818240    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:29.832559    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:29.832570    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:29.847147    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:29.847157    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:29.862211    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:29.862226    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:29.900102    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:29.900110    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:29.913842    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:29.913852    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:29.925526    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:29.925536    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:29.949509    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:29.949519    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:29.953427    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:29.953433    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:29.992014    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:29.992024    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:30.002949    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:30.002960    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:30.017158    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:30.017172    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:30.034340    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:30.034350    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:30.045026    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:32.548692    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:35.045349    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:35.045499    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:35.063047    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:35.063131    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:35.075896    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:35.075969    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:35.087327    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:35.087396    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:35.099519    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:35.099580    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:35.114117    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:35.114190    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:35.125358    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:35.125425    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:35.135724    4112 logs.go:276] 0 containers: []
	W0307 14:45:35.135735    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:35.135800    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:35.149373    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:35.149387    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:35.149392    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:35.172685    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:35.172695    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:35.183875    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:35.183886    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:35.217521    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:35.217617    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:35.218770    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:35.218777    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:35.254012    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:35.254025    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:35.265485    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:35.265498    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:35.279850    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:35.279863    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:35.291586    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:35.291600    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:35.309526    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:35.309538    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:35.314109    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:35.314116    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:35.328712    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:35.328723    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:35.342820    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:35.342829    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:35.354362    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:35.354375    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:35.365954    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:35.365963    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:35.365987    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:45:35.365991    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:35.365995    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:35.365999    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:35.366001    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:45:37.549325    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:37.549476    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:37.569762    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:37.569846    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:37.582891    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:37.582969    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:37.593543    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:37.593610    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:37.603758    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:37.603822    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:37.614714    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:37.614781    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:37.625707    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:37.625775    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:37.639182    4258 logs.go:276] 0 containers: []
	W0307 14:45:37.639196    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:37.639258    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:37.650207    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:37.650224    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:37.650231    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:37.684675    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:37.684689    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:37.698680    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:37.698693    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:37.716720    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:37.716731    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:37.755190    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:37.755203    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:37.770647    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:37.770660    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:37.782165    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:37.782176    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:37.793500    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:37.793514    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:37.804610    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:37.804628    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:37.808590    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:37.808598    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:37.823421    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:37.823430    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:37.835960    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:37.835971    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:37.847622    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:37.847637    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:37.863148    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:37.863158    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:37.903427    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:37.903438    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:37.917312    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:37.917322    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:37.929040    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:37.929050    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:40.454575    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:45.457310    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:45.457466    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:45.477369    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:45.477468    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:45.492294    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:45.492376    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:45.504647    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:45.504720    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:45.515165    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:45.515241    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:45.529589    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:45.529664    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:45.540149    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:45.540275    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:45.550153    4258 logs.go:276] 0 containers: []
	W0307 14:45:45.550166    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:45.550225    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:45.562118    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:45.562132    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:45.562138    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:45.566556    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:45.566562    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:45.580688    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:45.580696    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:45.592766    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:45.592774    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:45.610322    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:45.610333    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:45.622839    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:45.622853    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:45.660835    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:45.660843    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:45.674762    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:45.674772    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:45.711659    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:45.711669    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:45.725401    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:45.725416    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:45.747596    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:45.747602    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:45.759869    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:45.759880    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:45.772501    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:45.772513    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:45.786860    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:45.786870    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:45.798262    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:45.798272    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:45.809671    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:45.809686    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:45.844018    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:45.844027    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:45.368044    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:48.358981    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:50.370441    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:50.370980    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:50.409388    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:45:50.409532    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:50.431594    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:45:50.431715    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:50.447276    4112 logs.go:276] 2 containers: [cf3188fb5d73 cb2252cd6914]
	I0307 14:45:50.447365    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:50.460455    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:45:50.460533    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:50.471677    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:45:50.471751    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:50.482305    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:45:50.482368    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:50.492703    4112 logs.go:276] 0 containers: []
	W0307 14:45:50.492715    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:50.492774    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:50.503671    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:45:50.503685    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:45:50.503691    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:45:50.518687    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:45:50.518697    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:45:50.536726    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:45:50.536736    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:45:50.548915    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:50.548928    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:50.573748    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:45:50.573761    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:50.589479    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:45:50.589495    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:45:50.601529    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:50.601542    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:50.606071    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:50.606077    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:50.642488    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:45:50.642499    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:45:50.661219    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:45:50.661231    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:45:50.674992    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:45:50.675006    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:45:50.686322    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:45:50.686336    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:45:50.697797    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:50.697811    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:45:50.730292    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:50.730389    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:50.731544    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:50.731554    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:45:50.731581    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:45:50.731588    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:45:50.731592    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:45:50.731598    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:45:50.731601    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
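The two "Found kubelet problem" warnings above are worth decoding. The kubelet on running-upgrade-974000 was denied a list/watch of the coredns ConfigMap by the apiserver's Node authorizer; its "no relationship found between node '...' and this object" message means the authorizer could not trace the ConfigMap to any pod currently bound to that node, which is typically transient during a binary upgrade while pods are being rebound. The logs.go:138 lines themselves come from minikube scanning the journalctl output for known problem patterns and later echoing the hits under "X Problems detected in kubelet:". A rough sketch of that kind of scan (the marker list here is illustrative, not minikube's actual one):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// problemMarkers is an illustrative subset of substrings a scanner might
// flag; minikube's logs.go keeps its own list, which we do not reproduce.
var problemMarkers = []string{"Failed to watch", "forbidden", "no relationship found"}

// findProblems returns journal lines containing any marker, mirroring the
// "Found kubelet problem:" warnings in the log above.
func findProblems(journal string) []string {
	var hits []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, m := range problemMarkers {
			if strings.Contains(line, m) {
				hits = append(hits, line)
				break
			}
		}
	}
	return hits
}

func main() {
	journal := `Mar 07 22:44:02 kubelet[12530]: E0307 reflector.go:138] failed to list *v1.ConfigMap: configmaps "coredns" is forbidden`
	for _, h := range findProblems(journal) {
		fmt.Println("Found kubelet problem:", h)
	}
}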
	I0307 14:45:53.361457    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:53.361755    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:53.385735    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:53.385859    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:53.402705    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:53.402782    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:53.421182    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:53.421250    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:53.432194    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:53.432264    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:53.442624    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:53.442686    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:53.453414    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:53.453486    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:53.463521    4258 logs.go:276] 0 containers: []
	W0307 14:45:53.463533    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:53.463586    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:53.480889    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:53.480907    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:53.480912    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:53.498436    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:53.498446    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:53.513371    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:53.513380    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:53.526104    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:53.526115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:53.540412    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:53.540425    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:53.580975    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:53.580987    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:53.595901    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:53.595911    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:53.607545    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:53.607556    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:53.619409    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:53.619419    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:53.658911    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:53.658924    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:53.663573    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:53.663578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:53.682629    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:53.682639    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:53.693919    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:53.693931    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:53.716469    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:53.716476    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:53.756001    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:53.756011    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:53.767630    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:53.767641    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:53.779045    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:53.779055    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:56.292551    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:01.294873    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
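Interleaved with the log gathering, two concurrent minikube invocations (PIDs 4112 and 4258) poll https://10.0.2.15:8443/healthz; each "stopped:" line is a GET that hit the client-side timeout, and the roughly five-second gap between every "Checking" and its "stopped" suggests a 5s per-request budget. Because both processes share one output stream, timestamps occasionally step backwards, as with the 14:46:00 line from PID 4112 appearing a few lines below a 14:46:01 line from PID 4258. A bounded probe along these lines; InsecureSkipVerify is used only to keep the sketch self-contained, whereas the real client verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single bounded GET against the apiserver healthz
// endpoint, like the api_server.go polling loop in the log above.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout, // exceeded -> "Client.Timeout exceeded while awaiting headers"
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
		fmt.Println("stopped:", err)
	}
}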
	I0307 14:46:01.294956    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:01.306312    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:01.306393    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:00.735819    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:01.317047    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:01.317114    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:01.327168    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:01.327242    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:01.338387    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:01.338458    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:01.351786    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:01.351861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:01.362086    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:01.362154    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:01.371964    4258 logs.go:276] 0 containers: []
	W0307 14:46:01.371975    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:01.372034    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:01.382641    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:01.382660    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:01.382666    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:01.386993    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:01.386999    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:01.404739    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:01.404752    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:01.416389    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:01.416400    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:01.427945    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:01.427956    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:01.440629    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:01.440639    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:01.455541    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:01.455552    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:01.493113    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:01.493123    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:01.510020    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:01.510032    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:01.521115    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:01.521126    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:01.540566    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:01.540578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:01.552603    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:01.552613    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:01.564070    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:01.564081    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:01.601116    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:01.601123    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:01.641528    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:01.641542    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:01.657139    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:01.657151    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:01.676308    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:01.676318    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:04.203078    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:05.738085    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:05.738273    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:05.758734    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:05.758853    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:05.774120    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:05.774197    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:05.786408    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:05.786486    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:05.797311    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:05.797374    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:05.807153    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:05.807221    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:05.817999    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:05.818067    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:05.828462    4112 logs.go:276] 0 containers: []
	W0307 14:46:05.828475    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:05.828537    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:05.838938    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:05.838957    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:05.838962    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
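The "describe nodes" step deliberately runs the kubectl binary that minikube provisioned inside the guest (/var/lib/minikube/binaries/v1.24.1/kubectl) against the guest's own kubeconfig, so the client version matches the cluster under test rather than whatever kubectl the host happens to have. An equivalent invocation as a sketch, with paths copied verbatim from the log; this only makes sense when run inside the guest:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "describe nodes" gathering step above.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}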
	I0307 14:46:05.879132    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:05.879143    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:05.898091    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:05.898106    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:05.911526    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:05.911538    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:05.929243    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:05.929255    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:05.952712    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:05.952723    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:05.985804    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:05.985897    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:05.987049    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:05.987057    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:05.992077    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:05.992085    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:06.006898    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:06.006908    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:06.019142    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:06.019153    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:06.033540    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:06.033551    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:06.045578    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:06.045590    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:06.057525    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:06.057535    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:06.072326    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:06.072339    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:06.083709    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:06.083719    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:06.094913    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:06.094923    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:06.094951    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:06.094957    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:06.094960    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:06.094964    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:06.094967    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:09.205386    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:09.205593    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:09.217570    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:09.217652    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:09.227791    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:09.227862    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:09.238788    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:09.238861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:09.249412    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:09.249483    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:09.259309    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:09.259379    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:09.274212    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:09.274286    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:09.283759    4258 logs.go:276] 0 containers: []
	W0307 14:46:09.283770    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:09.283827    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:09.294230    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:09.294248    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:09.294254    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:09.305988    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:09.305999    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:09.317570    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:09.317584    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:09.357941    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:09.357952    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:09.375181    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:09.375193    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:09.406149    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:09.406168    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:09.430893    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:09.430908    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:09.467817    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:09.467829    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:09.472008    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:09.472014    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:09.483139    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:09.483150    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:09.495576    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:09.495588    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:09.506777    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:09.506790    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:09.518770    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:09.518784    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:09.532860    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:09.532869    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:09.570732    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:09.570744    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:09.585287    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:09.585299    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:09.600239    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:09.600250    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:12.113111    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:16.099147    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:17.115473    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:17.115647    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:17.132568    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:17.132650    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:17.147921    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:17.147996    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:17.158339    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:17.158410    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:17.169509    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:17.169578    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:17.180434    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:17.180504    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:17.191300    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:17.191370    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:17.201805    4258 logs.go:276] 0 containers: []
	W0307 14:46:17.201818    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:17.201881    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:17.212079    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:17.212096    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:17.212101    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:17.234929    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:17.234937    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:17.249275    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:17.249287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:17.263300    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:17.263312    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:17.275527    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:17.275536    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:17.287484    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:17.287494    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:17.298351    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:17.298362    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:17.315141    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:17.315152    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
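Two host-level sources round out each gathering pass: journalctl -u docker -u cri-docker -n 400 pulls the last 400 journal lines for the Docker and cri-docker units together, and dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 prints the kernel ring buffer human-readably (-H) without a pager (-P) or color (-L=never), restricted to warning severity and above and again capped at 400 lines. The uniform 400-line budget keeps each diagnostic bundle bounded no matter how chatty a component is.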
	I0307 14:46:17.319468    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:17.319478    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:17.353103    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:17.353115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:17.367756    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:17.367768    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:17.399239    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:17.399249    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:17.412038    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:17.412048    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:17.423550    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:17.423561    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:17.460748    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:17.460761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:17.476551    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:17.476560    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:17.516799    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:17.516810    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:20.033194    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:21.101520    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:21.101710    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:21.134589    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:21.134694    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:21.151868    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:21.151945    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:21.164907    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:21.164988    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:21.176076    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:21.176143    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:21.186730    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:21.186808    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:21.197765    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:21.197837    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:21.208434    4112 logs.go:276] 0 containers: []
	W0307 14:46:21.208444    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:21.208495    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:21.218635    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:21.218657    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:21.218662    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:21.230939    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:21.230952    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:21.236181    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:21.236193    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:21.250797    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:21.250807    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:21.262346    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:21.262360    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:21.274219    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:21.274231    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:21.291753    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:21.291764    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:21.303418    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:21.303429    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:21.329425    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:21.329434    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:21.365104    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:21.365120    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:21.377305    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:21.377318    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:21.388923    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:21.388935    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:21.403716    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:21.403728    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:21.416375    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:21.416386    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:21.449778    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:21.449871    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:21.450959    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:21.450966    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:21.466308    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:21.466319    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:21.466348    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:21.466354    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:21.466358    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:21.466361    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:21.466366    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:25.035575    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:25.035811    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:25.056743    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:25.056845    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:25.076053    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:25.076134    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:25.087849    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:25.087926    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:25.098554    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:25.098626    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:25.108895    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:25.108966    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:25.119522    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:25.119588    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:25.130140    4258 logs.go:276] 0 containers: []
	W0307 14:46:25.130151    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:25.130211    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:25.141613    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:25.141632    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:25.141638    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:25.179569    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:25.179579    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:25.224231    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:25.224242    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:25.238809    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:25.238823    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:25.253750    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:25.253760    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:25.267530    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:25.267540    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:25.278748    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:25.278761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:25.289894    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:25.289906    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:25.301190    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:25.301200    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:25.312463    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:25.312474    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:25.347888    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:25.347899    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:25.361452    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:25.361461    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:25.378932    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:25.378942    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:25.391585    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:25.391596    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:25.403173    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:25.403184    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:25.426365    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:25.426374    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:25.430653    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:25.430658    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:27.944403    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:31.468690    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:32.944874    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:32.945186    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:32.982912    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:32.983037    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:33.003008    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:33.003099    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:33.015659    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:33.015732    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:33.027002    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:33.027080    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:33.038107    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:33.038175    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:33.048612    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:33.048680    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:33.059099    4258 logs.go:276] 0 containers: []
	W0307 14:46:33.059112    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:33.059172    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:33.069542    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:33.069559    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:33.069564    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:33.106708    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:33.106716    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:33.121281    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:33.121293    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:33.133585    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:33.133595    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:33.145548    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:33.145559    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:33.157721    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:33.157733    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:33.163607    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:33.163615    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:33.178448    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:33.178459    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:33.196354    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:33.196366    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:33.209132    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:33.209145    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:33.230704    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:33.230712    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:33.272427    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:33.272437    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:33.286379    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:33.286390    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:33.300885    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:33.300896    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:33.315622    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:33.315632    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:33.327713    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:33.327724    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:33.362370    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:33.362383    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:35.886442    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:36.470985    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:36.471129    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:36.482543    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:36.482615    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:36.493094    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:36.493179    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:36.503276    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:36.503347    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:36.513732    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:36.513804    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:36.523881    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:36.523947    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:36.534889    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:36.534961    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:36.545257    4112 logs.go:276] 0 containers: []
	W0307 14:46:36.545269    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:36.545330    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:36.556322    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:36.556339    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:36.556345    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:36.573621    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:36.573632    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:36.585084    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:36.585095    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:36.619627    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:36.619642    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:36.634247    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:36.634259    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:36.646030    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:36.646041    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:36.657366    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:36.657378    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:36.672578    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:36.672589    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:36.684464    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:36.684477    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:36.707657    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:36.707664    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:36.721096    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:36.721109    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:36.738617    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:36.738630    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:36.750446    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:36.750458    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:36.784012    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:36.784105    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:36.785186    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:36.785191    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:36.789796    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:36.789805    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:36.801165    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:36.801178    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:36.801202    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:36.801207    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:36.801210    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:36.801214    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:36.801217    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:40.888801    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:40.889120    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:40.919735    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:40.919841    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:40.938164    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:40.938256    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:40.952563    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:40.952647    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:40.972368    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:40.972436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:40.984737    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:40.984808    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:40.995497    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:40.995563    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:41.005945    4258 logs.go:276] 0 containers: []
	W0307 14:46:41.005957    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:41.006016    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:41.016087    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:41.016102    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:41.016107    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:41.030063    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:41.030074    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:41.044273    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:41.044286    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:41.056010    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:41.056022    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:41.068215    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:41.068225    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:41.102415    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:41.102426    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:41.116827    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:41.116839    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:41.135276    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:41.135287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:41.152824    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:41.152835    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:41.190405    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:41.190419    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:41.202260    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:41.202272    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:41.213692    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:41.213701    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:41.230187    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:41.230199    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:41.252238    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:41.252246    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:41.256807    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:41.256820    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:41.272053    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:41.272065    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:41.283709    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:41.283723    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:43.822536    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:46.805035    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:48.824893    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:48.824940    4258 kubeadm.go:591] duration metric: took 4m3.9415295s to restartPrimaryControlPlane
	W0307 14:46:48.824990    4258 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 14:46:48.825007    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 14:46:49.888830    4258 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063809125s)
	I0307 14:46:49.888915    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 14:46:49.893702    4258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:46:49.896853    4258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:46:49.899747    4258 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:46:49.899754    4258 kubeadm.go:156] found existing configuration files:
	
	I0307 14:46:49.899793    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0307 14:46:49.902500    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:46:49.902539    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:46:49.905311    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0307 14:46:49.907908    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:46:49.907935    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:46:49.910528    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0307 14:46:49.913198    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:46:49.913222    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:46:49.915771    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0307 14:46:49.918247    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:46:49.918271    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
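The four grep/rm pairs above are a stale-kubeconfig sweep: each conf file survives only if it already references the expected control-plane endpoint. A minimal bash equivalent of the same sweep, assuming the port 50506 shown in the log:

    # Remove any kubeconfig that does not point at the expected control-plane
    # URL, so the subsequent "kubeadm init" regenerates it from scratch.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:50506" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done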
	I0307 14:46:49.921550    4258 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 14:46:49.939067    4258 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 14:46:49.939126    4258 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 14:46:49.994720    4258 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 14:46:49.994903    4258 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 14:46:49.995044    4258 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 14:46:50.050202    4258 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 14:46:50.055370    4258 out.go:204]   - Generating certificates and keys ...
	I0307 14:46:50.055404    4258 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 14:46:50.055434    4258 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 14:46:50.055478    4258 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 14:46:50.055516    4258 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 14:46:50.055557    4258 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 14:46:50.055587    4258 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 14:46:50.055623    4258 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 14:46:50.055657    4258 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 14:46:50.055706    4258 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 14:46:50.055744    4258 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 14:46:50.055766    4258 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 14:46:50.055796    4258 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 14:46:50.286405    4258 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 14:46:50.359465    4258 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 14:46:50.445586    4258 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 14:46:50.615872    4258 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 14:46:50.646038    4258 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 14:46:50.646350    4258 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 14:46:50.646389    4258 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 14:46:50.715742    4258 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 14:46:50.719938    4258 out.go:204]   - Booting up control plane ...
	I0307 14:46:50.720098    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 14:46:50.720174    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 14:46:50.720220    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 14:46:50.720264    4258 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 14:46:50.720346    4258 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
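kubeadm has now written the static Pod manifests and handed off to the kubelet, which boots the control plane from /etc/kubernetes/manifests. A quick, read-only way to inspect what was generated (paths from the log; output will vary by version):

    # List the generated static Pod manifests and the image each one runs.
    sudo ls -la /etc/kubernetes/manifests
    sudo grep -h 'image:' /etc/kubernetes/manifests/*.yaml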
	I0307 14:46:51.807306    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:51.807414    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:51.819320    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:46:51.819400    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:51.830855    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:46:51.830937    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:51.843505    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:46:51.843588    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:51.855522    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:46:51.855601    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:51.867380    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:46:51.867461    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:51.880389    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:46:51.880472    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:51.892672    4112 logs.go:276] 0 containers: []
	W0307 14:46:51.892685    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:51.892754    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:51.906551    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:46:51.906569    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:46:51.906575    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:46:51.923608    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:46:51.923620    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:46:51.941820    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:46:51.941831    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:46:51.957772    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:51.957784    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:51.997544    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:46:51.997556    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:46:52.010030    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:46:52.010041    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:46:52.048757    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:52.048768    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:52.079551    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:52.079564    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:46:52.115211    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:52.115306    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:52.116459    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:46:52.116471    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:46:52.128292    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:46:52.128305    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:46:52.140707    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:46:52.140719    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:52.152783    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:52.152795    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:52.157173    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:46:52.157183    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:46:52.174646    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:46:52.174657    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:46:52.187550    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:46:52.187562    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:46:52.200957    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:52.200968    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:46:52.200995    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:46:52.201000    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:46:52.201005    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:46:52.201010    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:46:52.201015    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:46:55.224161    4258 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504878 seconds
	I0307 14:46:55.224224    4258 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 14:46:55.227984    4258 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 14:46:55.749107    4258 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 14:46:55.749433    4258 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-491000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 14:46:56.252933    4258 kubeadm.go:309] [bootstrap-token] Using token: 8ib4o4.pzgh3dxctohekoxl
	I0307 14:46:56.259535    4258 out.go:204]   - Configuring RBAC rules ...
	I0307 14:46:56.259603    4258 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 14:46:56.259662    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 14:46:56.261693    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 14:46:56.266081    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 14:46:56.266904    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 14:46:56.267974    4258 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 14:46:56.271269    4258 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 14:46:56.437841    4258 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 14:46:56.660214    4258 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 14:46:56.660707    4258 kubeadm.go:309] 
	I0307 14:46:56.660738    4258 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 14:46:56.660742    4258 kubeadm.go:309] 
	I0307 14:46:56.660783    4258 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 14:46:56.660786    4258 kubeadm.go:309] 
	I0307 14:46:56.660796    4258 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 14:46:56.660834    4258 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 14:46:56.660863    4258 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 14:46:56.660866    4258 kubeadm.go:309] 
	I0307 14:46:56.660894    4258 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 14:46:56.660898    4258 kubeadm.go:309] 
	I0307 14:46:56.660924    4258 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 14:46:56.660929    4258 kubeadm.go:309] 
	I0307 14:46:56.660953    4258 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 14:46:56.660991    4258 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 14:46:56.661026    4258 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 14:46:56.661029    4258 kubeadm.go:309] 
	I0307 14:46:56.661068    4258 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 14:46:56.661114    4258 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 14:46:56.661120    4258 kubeadm.go:309] 
	I0307 14:46:56.661159    4258 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8ib4o4.pzgh3dxctohekoxl \
	I0307 14:46:56.661232    4258 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b \
	I0307 14:46:56.661246    4258 kubeadm.go:309] 	--control-plane 
	I0307 14:46:56.661249    4258 kubeadm.go:309] 
	I0307 14:46:56.661298    4258 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 14:46:56.661303    4258 kubeadm.go:309] 
	I0307 14:46:56.661359    4258 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8ib4o4.pzgh3dxctohekoxl \
	I0307 14:46:56.661412    4258 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b 
	I0307 14:46:56.661655    4258 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 14:46:56.661772    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:46:56.661781    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:46:56.668217    4258 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 14:46:56.676236    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 14:46:56.679220    4258 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
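The 457-byte conflist payload itself is not captured in the log. For orientation only, a typical bridge conflist of the kind minikube installs looks roughly like this; every field value below is an assumption, not the actual file contents:

    # Illustrative sketch only, not the real payload written above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF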
	I0307 14:46:56.684294    4258 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 14:46:56.684349    4258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 14:46:56.684357    4258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-491000 minikube.k8s.io/updated_at=2024_03_07T14_46_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=stopped-upgrade-491000 minikube.k8s.io/primary=true
	I0307 14:46:56.687256    4258 ops.go:34] apiserver oom_adj: -16
	I0307 14:46:56.725516    4258 kubeadm.go:1106] duration metric: took 41.2035ms to wait for elevateKubeSystemPrivileges
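elevateKubeSystemPrivileges is the clusterrolebinding created two commands earlier: it grants cluster-admin to the kube-system default service account so addon pods can manage cluster resources. Had the apiserver been reachable, it could be verified with:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide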
	W0307 14:46:56.725554    4258 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 14:46:56.725560    4258 kubeadm.go:393] duration metric: took 4m11.8557655s to StartCluster
	I0307 14:46:56.725570    4258 settings.go:142] acquiring lock: {Name:mke1b21879aa30c53aba22e595b1eacff0880bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:46:56.725678    4258 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:46:56.726107    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:46:56.726314    4258 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:46:56.729138    4258 out.go:177] * Verifying Kubernetes components...
	I0307 14:46:56.726367    4258 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 14:46:56.726485    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:46:56.737222    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:46:56.737238    4258 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-491000"
	I0307 14:46:56.737241    4258 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-491000"
	I0307 14:46:56.737252    4258 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-491000"
	W0307 14:46:56.737255    4258 addons.go:243] addon storage-provisioner should already be in state true
	I0307 14:46:56.737256    4258 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-491000"
	I0307 14:46:56.737266    4258 host.go:66] Checking if "stopped-upgrade-491000" exists ...
	I0307 14:46:56.738663    4258 kapi.go:59] client config for stopped-upgrade-491000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c7b6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:46:56.738778    4258 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-491000"
	W0307 14:46:56.738783    4258 addons.go:243] addon default-storageclass should already be in state true
	I0307 14:46:56.738791    4258 host.go:66] Checking if "stopped-upgrade-491000" exists ...
	I0307 14:46:56.743164    4258 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:46:56.739468    4258 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 14:46:56.747162    4258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 14:46:56.747170    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:46:56.747208    4258 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:46:56.747214    4258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 14:46:56.747218    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	W0307 14:46:56.747822    4258 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50688->127.0.0.1:50471: read: connection reset by peer
	I0307 14:46:56.747840    4258 retry.go:31] will retry after 270.011748ms: ssh: handshake failed: read tcp 127.0.0.1:50688->127.0.0.1:50471: read: connection reset by peer
	I0307 14:46:56.812726    4258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:46:56.817356    4258 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:46:56.817398    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:46:56.820948    4258 api_server.go:72] duration metric: took 94.623416ms to wait for apiserver process to appear ...
	I0307 14:46:56.820955    4258 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:46:56.820961    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:56.867347    4258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 14:46:57.055196    4258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:47:02.204760    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:01.823166    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:01.823231    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:07.206984    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:07.207094    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:07.218618    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:07.218692    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:07.229584    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:07.229651    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:07.240368    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:07.240450    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:07.251801    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:07.251869    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:07.263555    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:07.263628    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:07.273860    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:07.273933    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:07.284215    4112 logs.go:276] 0 containers: []
	W0307 14:47:07.284228    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:07.284287    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:07.297527    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:07.297549    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:07.297554    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:07.311953    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:07.311964    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:07.323117    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:07.323129    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:07.334602    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:07.334616    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:07.350357    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:07.350368    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:07.363603    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:07.363615    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:07.368233    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:07.368240    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:07.402107    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:07.402118    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:07.417070    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:07.417082    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:07.442393    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:07.442401    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:07.477676    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:07.477781    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:07.478939    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:07.478944    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:07.493986    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:07.493997    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:07.511716    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:07.511730    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:07.526461    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:07.526471    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:07.542654    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:07.542664    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:07.554876    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:07.554886    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:07.554914    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:07.554918    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:07.554921    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:07.554925    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:07.554927    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:06.823842    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:06.823906    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:11.824444    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:11.824497    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:17.558532    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:16.824962    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:16.824999    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:22.560857    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:22.560986    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:22.574702    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:22.574796    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:22.586534    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:22.586598    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:22.597391    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:22.597459    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:22.607979    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:22.608049    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:22.618904    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:22.618967    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:22.629923    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:22.629993    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:22.640191    4112 logs.go:276] 0 containers: []
	W0307 14:47:22.640206    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:22.640275    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:22.653097    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:22.653113    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:22.653118    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:22.668204    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:22.668214    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:22.679612    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:22.679625    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:22.684736    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:22.684747    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:22.721979    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:22.721990    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:22.735287    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:22.735298    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:22.747660    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:22.747673    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:22.759848    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:22.759861    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:22.771340    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:22.771351    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:22.789213    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:22.789223    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:22.813865    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:22.813874    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:22.848414    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:22.848508    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:22.849626    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:22.849633    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:22.865430    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:22.865444    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:22.880093    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:22.880105    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:22.893038    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:22.893049    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:22.908461    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:22.908470    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:22.908495    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:22.908499    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:22.908518    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:22.908526    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:22.908530    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:21.825869    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:21.825905    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:26.826859    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:26.826895    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 14:47:27.198770    4258 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 14:47:27.203824    4258 out.go:177] * Enabled addons: storage-provisioner
	I0307 14:47:27.215736    4258 addons.go:505] duration metric: took 30.489340917s for enable addons: enabled=[storage-provisioner]
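Only storage-provisioner ends up enabled; default-storageclass fails with the dial timeout above because the apiserver at 10.0.2.15:8443 never becomes reachable. Had the cluster come up, the addon could be checked with (pod name assumed from minikube's standard storage-provisioner manifest):

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pod storage-provisioner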
	I0307 14:47:32.912689    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:31.828183    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:31.828285    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:37.914938    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:37.915190    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:37.943144    4112 logs.go:276] 1 containers: [61a70556bb46]
	I0307 14:47:37.943271    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:37.960849    4112 logs.go:276] 1 containers: [c15538203825]
	I0307 14:47:37.960934    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:37.974492    4112 logs.go:276] 4 containers: [72d00845cf74 2aafa15e68eb cf3188fb5d73 cb2252cd6914]
	I0307 14:47:37.974564    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:37.985661    4112 logs.go:276] 1 containers: [2e748d15bfb3]
	I0307 14:47:37.985739    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:37.995609    4112 logs.go:276] 1 containers: [8f961ef86106]
	I0307 14:47:37.995682    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:38.009663    4112 logs.go:276] 1 containers: [371d1eb91aa4]
	I0307 14:47:38.009730    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:38.026665    4112 logs.go:276] 0 containers: []
	W0307 14:47:38.026677    4112 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:38.026737    4112 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:38.037081    4112 logs.go:276] 1 containers: [c1b57edc8d85]
	I0307 14:47:38.037098    4112 logs.go:123] Gathering logs for kube-scheduler [2e748d15bfb3] ...
	I0307 14:47:38.037104    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e748d15bfb3"
	I0307 14:47:38.051788    4112 logs.go:123] Gathering logs for storage-provisioner [c1b57edc8d85] ...
	I0307 14:47:38.051808    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b57edc8d85"
	I0307 14:47:38.063961    4112 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:38.063971    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:38.087463    4112 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:38.087471    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 14:47:38.120759    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:38.120852    4112 logs.go:138] Found kubelet problem: Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:38.122009    4112 logs.go:123] Gathering logs for etcd [c15538203825] ...
	I0307 14:47:38.122018    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c15538203825"
	I0307 14:47:38.137512    4112 logs.go:123] Gathering logs for coredns [72d00845cf74] ...
	I0307 14:47:38.137523    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72d00845cf74"
	I0307 14:47:38.149393    4112 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:38.149403    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:38.153997    4112 logs.go:123] Gathering logs for kube-apiserver [61a70556bb46] ...
	I0307 14:47:38.154005    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a70556bb46"
	I0307 14:47:38.168744    4112 logs.go:123] Gathering logs for coredns [cb2252cd6914] ...
	I0307 14:47:38.168755    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2252cd6914"
	I0307 14:47:38.180906    4112 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:38.180919    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:38.218099    4112 logs.go:123] Gathering logs for container status ...
	I0307 14:47:38.218111    4112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:38.230889    4112 logs.go:123] Gathering logs for kube-controller-manager [371d1eb91aa4] ...
	I0307 14:47:38.230900    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371d1eb91aa4"
	I0307 14:47:38.249713    4112 logs.go:123] Gathering logs for coredns [2aafa15e68eb] ...
	I0307 14:47:38.249724    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aafa15e68eb"
	I0307 14:47:38.261809    4112 logs.go:123] Gathering logs for coredns [cf3188fb5d73] ...
	I0307 14:47:38.261820    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf3188fb5d73"
	I0307 14:47:38.275700    4112 logs.go:123] Gathering logs for kube-proxy [8f961ef86106] ...
	I0307 14:47:38.275712    4112 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f961ef86106"
	I0307 14:47:38.288004    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:38.288016    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 14:47:38.288045    4112 out.go:239] X Problems detected in kubelet:
	W0307 14:47:38.288049    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	W0307 14:47:38.288083    4112 out.go:239]   Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	I0307 14:47:38.288105    4112 out.go:304] Setting ErrFile to fd 2...
	I0307 14:47:38.288145    4112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:47:36.830170    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:36.830213    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:41.832403    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:41.832495    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:48.291076    4112 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:46.833865    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:46.833887    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:53.293373    4112 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:53.297973    4112 out.go:177] 
	W0307 14:47:53.301937    4112 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 14:47:53.301947    4112 out.go:239] * 
	W0307 14:47:53.302748    4112 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:47:53.316953    4112 out.go:177] 
	I0307 14:47:51.836119    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:51.836167    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:56.838593    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:56.838792    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:56.854314    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:47:56.854417    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:56.866551    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:47:56.866618    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:56.877131    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:47:56.877201    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:56.887713    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:47:56.887778    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:56.898306    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:47:56.898370    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:56.908753    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:47:56.908830    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:56.919241    4258 logs.go:276] 0 containers: []
	W0307 14:47:56.919253    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:56.919311    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:56.929786    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:47:56.929806    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:47:56.929811    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:47:56.946097    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:47:56.946107    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:47:56.957630    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:56.957643    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:47:56.994159    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:56.994169    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:56.998299    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:56.998305    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:57.033350    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:47:57.033362    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:47:57.048478    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:47:57.048490    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:47:57.063366    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:47:57.063376    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:47:57.075412    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:47:57.075426    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:47:57.087384    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:57.087395    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:57.112246    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:47:57.112255    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:57.125036    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:47:57.125046    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:47:57.137385    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:47:57.137396    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:47:59.657490    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:04.659080    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:04.659208    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:04.672749    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:04.672830    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:04.683596    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:04.683667    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:04.693879    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:04.693945    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:04.704337    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:04.704409    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:04.715601    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:04.715669    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:04.729435    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:04.729509    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:04.739779    4258 logs.go:276] 0 containers: []
	W0307 14:48:04.739791    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:04.739852    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:04.750569    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:04.750583    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:04.750588    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:04.762164    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:04.762175    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:04.779138    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:04.779147    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:04.814694    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:04.814702    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:04.819014    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:04.819023    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:04.854325    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:04.854337    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:04.866334    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:04.866343    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:04.877776    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:04.877788    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:04.889360    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:04.889371    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:04.903328    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:04.903337    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:04.919184    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:04.919195    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:04.934142    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:04.934151    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:04.945365    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:04.945378    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-03-07 22:38:53 UTC, ends at Thu 2024-03-07 22:48:09 UTC. --
	Mar 07 22:47:50 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:50Z" level=error msg="ContainerStats resp: {0x4000988480 linux}"
	Mar 07 22:47:50 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:50Z" level=error msg="ContainerStats resp: {0x40009885c0 linux}"
	Mar 07 22:47:50 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:50Z" level=error msg="ContainerStats resp: {<nil> }"
	Mar 07 22:47:50 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:50Z" level=error msg="Error response from daemon: No such container: cf3188fb5d7346f5b730ad7fb74f7febe779d149fc93eb7633a58d191d719954 Failed to get stats from container cf3188fb5d7346f5b730ad7fb74f7febe779d149fc93eb7633a58d191d719954"
	Mar 07 22:47:51 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:51Z" level=error msg="ContainerStats resp: {0x40003fed00 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x40003ff4c0 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x40003ff900 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x40009a1680 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x40009a1b80 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x40009700c0 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x400007e480 linux}"
	Mar 07 22:47:52 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:52Z" level=error msg="ContainerStats resp: {0x4000970740 linux}"
	Mar 07 22:47:54 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 07 22:47:59 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:47:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 07 22:48:02 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:02Z" level=error msg="ContainerStats resp: {0x40003ffc80 linux}"
	Mar 07 22:48:02 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:02Z" level=error msg="ContainerStats resp: {0x40003ffe00 linux}"
	Mar 07 22:48:03 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:03Z" level=error msg="ContainerStats resp: {0x40007e9400 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400099e400 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400099e540 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x4000357200 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400099ee00 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400099f480 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400007e900 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=error msg="ContainerStats resp: {0x400007eec0 linux}"
	Mar 07 22:48:04 running-upgrade-974000 cri-dockerd[3058]: time="2024-03-07T22:48:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a019099467964       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   580c51f3c572e
	980aa133e2c9a       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   b9347267218e3
	72d00845cf748       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   580c51f3c572e
	2aafa15e68eb6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b9347267218e3
	c1b57edc8d854       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   e7f4478542695
	8f961ef86106d       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   37a97e1bc7108
	61a70556bb46b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f39242281d8bb
	371d1eb91aa4e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   70128eaccf8ed
	c155382038257       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8f1318c1c27a1
	2e748d15bfb3e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e674cece6c419
	
	
	==> coredns [2aafa15e68eb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6053482387932000695.3849913404555691207. HINFO: read udp 10.244.0.2:36377->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6053482387932000695.3849913404555691207. HINFO: read udp 10.244.0.2:46823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6053482387932000695.3849913404555691207. HINFO: read udp 10.244.0.2:36829->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6053482387932000695.3849913404555691207. HINFO: read udp 10.244.0.2:60282->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [72d00845cf74] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:36367->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:37669->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:48204->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:53329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:44734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:43290->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:58273->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:36689->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:49576->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7638821219499108952.192092692745880306. HINFO: read udp 10.244.0.3:49250->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [980aa133e2c9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7429893340747378323.7251797740421918147. HINFO: read udp 10.244.0.2:45802->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7429893340747378323.7251797740421918147. HINFO: read udp 10.244.0.2:40247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7429893340747378323.7251797740421918147. HINFO: read udp 10.244.0.2:53841->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7429893340747378323.7251797740421918147. HINFO: read udp 10.244.0.2:51894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7429893340747378323.7251797740421918147. HINFO: read udp 10.244.0.2:35011->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a01909946796] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5584358308721105380.8044452907223539787. HINFO: read udp 10.244.0.3:59433->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5584358308721105380.8044452907223539787. HINFO: read udp 10.244.0.3:41350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5584358308721105380.8044452907223539787. HINFO: read udp 10.244.0.3:53772->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5584358308721105380.8044452907223539787. HINFO: read udp 10.244.0.3:54418->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5584358308721105380.8044452907223539787. HINFO: read udp 10.244.0.3:54752->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-974000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-974000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=running-upgrade-974000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T14_43_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 22:43:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-974000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 22:48:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 22:43:48 +0000   Thu, 07 Mar 2024 22:43:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 22:43:48 +0000   Thu, 07 Mar 2024 22:43:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 22:43:48 +0000   Thu, 07 Mar 2024 22:43:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 22:43:48 +0000   Thu, 07 Mar 2024 22:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-974000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 15ebebd423454a3f928e99885f555ddf
	  System UUID:                15ebebd423454a3f928e99885f555ddf
	  Boot ID:                    6a3d4486-4ea9-425c-ad6f-d2969ceb9bce
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4f2qt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-lrqcd                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-974000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-974000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-974000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-rvds2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-974000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-974000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-974000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-974000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-974000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-974000 event: Registered Node running-upgrade-974000 in Controller
	
	
	==> dmesg <==
	[  +1.681324] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.081561] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.077228] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.140848] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.087038] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.074889] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.482334] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +14.188364] systemd-fstab-generator[1944]: Ignoring "noauto" for root device
	[  +2.658588] systemd-fstab-generator[2221]: Ignoring "noauto" for root device
	[  +0.150566] systemd-fstab-generator[2254]: Ignoring "noauto" for root device
	[  +0.097164] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.096398] systemd-fstab-generator[2278]: Ignoring "noauto" for root device
	[  +2.487733] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.202945] systemd-fstab-generator[3013]: Ignoring "noauto" for root device
	[  +0.086228] systemd-fstab-generator[3026]: Ignoring "noauto" for root device
	[  +0.083463] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.101060] systemd-fstab-generator[3051]: Ignoring "noauto" for root device
	[  +2.043303] systemd-fstab-generator[3206]: Ignoring "noauto" for root device
	[  +6.159437] systemd-fstab-generator[3621]: Ignoring "noauto" for root device
	[  +1.013157] systemd-fstab-generator[3748]: Ignoring "noauto" for root device
	[ +19.005925] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 7 22:43] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.284197] systemd-fstab-generator[11919]: Ignoring "noauto" for root device
	[  +5.634942] systemd-fstab-generator[12524]: Ignoring "noauto" for root device
	[  +0.459809] systemd-fstab-generator[12653]: Ignoring "noauto" for root device
	
	
	==> etcd [c15538203825] <==
	{"level":"info","ts":"2024-03-07T22:43:44.176Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-07T22:43:44.177Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-07T22:43:44.918Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-974000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T22:43:44.919Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T22:43:44.920Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T22:43:44.920Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 22:48:09 up 9 min,  0 users,  load average: 0.29, 0.32, 0.20
	Linux running-upgrade-974000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [61a70556bb46] <==
	I0307 22:43:46.116321       1 controller.go:611] quota admission added evaluator for: namespaces
	I0307 22:43:46.151158       1 cache.go:39] Caches are synced for autoregister controller
	I0307 22:43:46.151257       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 22:43:46.154780       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0307 22:43:46.154799       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0307 22:43:46.154877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 22:43:46.169476       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0307 22:43:46.889311       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 22:43:47.053478       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0307 22:43:47.054640       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0307 22:43:47.054649       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 22:43:47.199484       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 22:43:47.210654       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 22:43:47.320864       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0307 22:43:47.322814       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0307 22:43:47.323199       1 controller.go:611] quota admission added evaluator for: endpoints
	I0307 22:43:47.324362       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 22:43:48.186800       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0307 22:43:48.711939       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0307 22:43:48.715221       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0307 22:43:48.740499       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0307 22:43:48.766854       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 22:44:01.541511       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0307 22:44:01.890660       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0307 22:44:02.703257       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [371d1eb91aa4] <==
	W0307 22:44:01.016372       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-974000. Assuming now as a timestamp.
	I0307 22:44:01.016397       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0307 22:44:01.016448       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0307 22:44:01.016519       1 event.go:294] "Event occurred" object="running-upgrade-974000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-974000 event: Registered Node running-upgrade-974000 in Controller"
	I0307 22:44:01.018128       1 shared_informer.go:262] Caches are synced for ephemeral
	I0307 22:44:01.026890       1 shared_informer.go:262] Caches are synced for namespace
	I0307 22:44:01.037782       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0307 22:44:01.038923       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0307 22:44:01.039157       1 shared_informer.go:262] Caches are synced for HPA
	I0307 22:44:01.039168       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0307 22:44:01.039322       1 shared_informer.go:262] Caches are synced for daemon sets
	I0307 22:44:01.039330       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0307 22:44:01.040104       1 shared_informer.go:262] Caches are synced for crt configmap
	I0307 22:44:01.040173       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0307 22:44:01.042635       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0307 22:44:01.200211       1 shared_informer.go:262] Caches are synced for resource quota
	I0307 22:44:01.220445       1 shared_informer.go:262] Caches are synced for attach detach
	I0307 22:44:01.241644       1 shared_informer.go:262] Caches are synced for resource quota
	I0307 22:44:01.542967       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0307 22:44:01.656995       1 shared_informer.go:262] Caches are synced for garbage collector
	I0307 22:44:01.688768       1 shared_informer.go:262] Caches are synced for garbage collector
	I0307 22:44:01.688777       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 22:44:01.893834       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rvds2"
	I0307 22:44:02.041881       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4f2qt"
	I0307 22:44:02.043988       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lrqcd"
	
	
	==> kube-proxy [8f961ef86106] <==
	I0307 22:44:02.689196       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0307 22:44:02.689270       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0307 22:44:02.689331       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0307 22:44:02.701099       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0307 22:44:02.701111       1 server_others.go:206] "Using iptables Proxier"
	I0307 22:44:02.701145       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0307 22:44:02.701274       1 server.go:661] "Version info" version="v1.24.1"
	I0307 22:44:02.701321       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 22:44:02.701662       1 config.go:317] "Starting service config controller"
	I0307 22:44:02.701705       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0307 22:44:02.701718       1 config.go:226] "Starting endpoint slice config controller"
	I0307 22:44:02.701720       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0307 22:44:02.702021       1 config.go:444] "Starting node config controller"
	I0307 22:44:02.702031       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0307 22:44:02.802717       1 shared_informer.go:262] Caches are synced for service config
	I0307 22:44:02.802750       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0307 22:44:02.803565       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [2e748d15bfb3] <==
	W0307 22:43:46.112203       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 22:43:46.112219       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 22:43:46.112264       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 22:43:46.112282       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 22:43:46.112309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 22:43:46.112326       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 22:43:46.112394       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 22:43:46.112428       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 22:43:46.112459       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 22:43:46.112475       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 22:43:46.112515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 22:43:46.112542       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 22:43:46.112559       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 22:43:46.112575       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 22:43:46.975451       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 22:43:46.975495       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 22:43:46.986169       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 22:43:46.986187       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 22:43:47.030418       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 22:43:47.030438       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 22:43:47.100208       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 22:43:47.100251       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 22:43:47.100292       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 22:43:47.100298       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0307 22:43:47.607207       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-03-07 22:38:53 UTC, ends at Thu 2024-03-07 22:48:09 UTC. --
	Mar 07 22:44:01 running-upgrade-974000 kubelet[12530]: E0307 22:44:01.708635   12530 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 07 22:44:01 running-upgrade-974000 kubelet[12530]: E0307 22:44:01.708657   12530 projected.go:192] Error preparing data for projected volume kube-api-access-sgqsx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 07 22:44:01 running-upgrade-974000 kubelet[12530]: E0307 22:44:01.708685   12530 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/eb9dcb08-d1a0-42e5-82f9-318b6f917ceb-kube-api-access-sgqsx podName:eb9dcb08-d1a0-42e5-82f9-318b6f917ceb nodeName:}" failed. No retries permitted until 2024-03-07 22:44:02.708676227 +0000 UTC m=+14.011230520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sgqsx" (UniqueName: "kubernetes.io/projected/eb9dcb08-d1a0-42e5-82f9-318b6f917ceb-kube-api-access-sgqsx") pod "storage-provisioner" (UID: "eb9dcb08-d1a0-42e5-82f9-318b6f917ceb") : configmap "kube-root-ca.crt" not found
	Mar 07 22:44:01 running-upgrade-974000 kubelet[12530]: I0307 22:44:01.895386   12530 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.009885   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwd9f\" (UniqueName: \"kubernetes.io/projected/10095a46-3f1b-471f-81b1-f9c644c7459a-kube-api-access-lwd9f\") pod \"kube-proxy-rvds2\" (UID: \"10095a46-3f1b-471f-81b1-f9c644c7459a\") " pod="kube-system/kube-proxy-rvds2"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.009945   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10095a46-3f1b-471f-81b1-f9c644c7459a-kube-proxy\") pod \"kube-proxy-rvds2\" (UID: \"10095a46-3f1b-471f-81b1-f9c644c7459a\") " pod="kube-system/kube-proxy-rvds2"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.009956   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10095a46-3f1b-471f-81b1-f9c644c7459a-lib-modules\") pod \"kube-proxy-rvds2\" (UID: \"10095a46-3f1b-471f-81b1-f9c644c7459a\") " pod="kube-system/kube-proxy-rvds2"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.009966   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10095a46-3f1b-471f-81b1-f9c644c7459a-xtables-lock\") pod \"kube-proxy-rvds2\" (UID: \"10095a46-3f1b-471f-81b1-f9c644c7459a\") " pod="kube-system/kube-proxy-rvds2"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.046421   12530 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.047385   12530 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: W0307 22:44:02.048631   12530 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: E0307 22:44:02.048684   12530 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-974000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-974000' and this object
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.110437   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptn9m\" (UniqueName: \"kubernetes.io/projected/1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085-kube-api-access-ptn9m\") pod \"coredns-6d4b75cb6d-4f2qt\" (UID: \"1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085\") " pod="kube-system/coredns-6d4b75cb6d-4f2qt"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.110461   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4aac009f-512e-4d94-9e46-a1528fecf99f-config-volume\") pod \"coredns-6d4b75cb6d-lrqcd\" (UID: \"4aac009f-512e-4d94-9e46-a1528fecf99f\") " pod="kube-system/coredns-6d4b75cb6d-lrqcd"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.110473   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg47d\" (UniqueName: \"kubernetes.io/projected/4aac009f-512e-4d94-9e46-a1528fecf99f-kube-api-access-tg47d\") pod \"coredns-6d4b75cb6d-lrqcd\" (UID: \"4aac009f-512e-4d94-9e46-a1528fecf99f\") " pod="kube-system/coredns-6d4b75cb6d-lrqcd"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.110486   12530 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085-config-volume\") pod \"coredns-6d4b75cb6d-4f2qt\" (UID: \"1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085\") " pod="kube-system/coredns-6d4b75cb6d-4f2qt"
	Mar 07 22:44:02 running-upgrade-974000 kubelet[12530]: I0307 22:44:02.957233   12530 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e7f44785426954ceb4b50dabc858db457b6823ce304841e47ee404a005a6eb5c"
	Mar 07 22:44:03 running-upgrade-974000 kubelet[12530]: E0307 22:44:03.211540   12530 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 07 22:44:03 running-upgrade-974000 kubelet[12530]: E0307 22:44:03.211598   12530 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4aac009f-512e-4d94-9e46-a1528fecf99f-config-volume podName:4aac009f-512e-4d94-9e46-a1528fecf99f nodeName:}" failed. No retries permitted until 2024-03-07 22:44:03.711586091 +0000 UTC m=+15.014140385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4aac009f-512e-4d94-9e46-a1528fecf99f-config-volume") pod "coredns-6d4b75cb6d-lrqcd" (UID: "4aac009f-512e-4d94-9e46-a1528fecf99f") : failed to sync configmap cache: timed out waiting for the condition
	Mar 07 22:44:03 running-upgrade-974000 kubelet[12530]: E0307 22:44:03.211548   12530 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 07 22:44:03 running-upgrade-974000 kubelet[12530]: E0307 22:44:03.211650   12530 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085-config-volume podName:1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085 nodeName:}" failed. No retries permitted until 2024-03-07 22:44:03.711643882 +0000 UTC m=+15.014198176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085-config-volume") pod "coredns-6d4b75cb6d-4f2qt" (UID: "1e4d40ec-1d7f-4d49-8a9e-6ba7ca41a085") : failed to sync configmap cache: timed out waiting for the condition
	Mar 07 22:44:04 running-upgrade-974000 kubelet[12530]: I0307 22:44:04.064810   12530 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b9347267218e35a9db884a72047649ef4836d49e05ea0e521fb765c7f55a0368"
	Mar 07 22:44:04 running-upgrade-974000 kubelet[12530]: I0307 22:44:04.082145   12530 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="580c51f3c572e11bc9f3fafeb28268345db2e27da0bb2776f30d87d9a368ee3f"
	Mar 07 22:47:50 running-upgrade-974000 kubelet[12530]: I0307 22:47:50.448614   12530 scope.go:110] "RemoveContainer" containerID="cf3188fb5d7346f5b730ad7fb74f7febe779d149fc93eb7633a58d191d719954"
	Mar 07 22:47:50 running-upgrade-974000 kubelet[12530]: I0307 22:47:50.467761   12530 scope.go:110] "RemoveContainer" containerID="cb2252cd6914b682c2db2de11a9d3662ed9e562a9b86a1345899e47fd6af09fb"
	
	
	==> storage-provisioner [c1b57edc8d85] <==
	I0307 22:44:03.033770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 22:44:03.037855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 22:44:03.037874       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 22:44:03.041853       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 22:44:03.042046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-974000_7fb17074-0e5f-4ebd-b10e-b450af093c05!
	I0307 22:44:03.042868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cd6b3c8-dbc5-413a-9714-14954d217155", APIVersion:"v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-974000_7fb17074-0e5f-4ebd-b10e-b450af093c05 became leader
	I0307 22:44:03.143052       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-974000_7fb17074-0e5f-4ebd-b10e-b450af093c05!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-974000 -n running-upgrade-974000
E0307 14:48:17.449305    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-974000 -n running-upgrade-974000: exit status 2 (15.666024833s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-974000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-974000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-974000
--- FAIL: TestRunningBinaryUpgrade (629.28s)

TestKubernetesUpgrade (17.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.774947833s)

-- stdout --
	* [kubernetes-upgrade-697000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-697000" primary control-plane node in "kubernetes-upgrade-697000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:40:56.486742    4179 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:40:56.486854    4179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:40:56.486857    4179 out.go:304] Setting ErrFile to fd 2...
	I0307 14:40:56.486860    4179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:40:56.486983    4179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:40:56.488075    4179 out.go:298] Setting JSON to false
	I0307 14:40:56.504438    4179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4213,"bootTime":1709847043,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:40:56.504527    4179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:40:56.510761    4179 out.go:177] * [kubernetes-upgrade-697000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:40:56.523661    4179 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:40:56.518625    4179 notify.go:220] Checking for updates...
	I0307 14:40:56.531679    4179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:40:56.534642    4179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:40:56.537732    4179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:40:56.540665    4179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:40:56.543679    4179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:40:56.546968    4179 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:40:56.547036    4179 config.go:182] Loaded profile config "running-upgrade-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:40:56.547078    4179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:40:56.549553    4179 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:40:56.556641    4179 start.go:297] selected driver: qemu2
	I0307 14:40:56.556646    4179 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:40:56.556658    4179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:40:56.559040    4179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:40:56.560704    4179 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:40:56.563689    4179 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 14:40:56.563730    4179 cni.go:84] Creating CNI manager for ""
	I0307 14:40:56.563737    4179 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 14:40:56.563766    4179 start.go:340] cluster config:
	{Name:kubernetes-upgrade-697000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:40:56.568308    4179 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:40:56.575683    4179 out.go:177] * Starting "kubernetes-upgrade-697000" primary control-plane node in "kubernetes-upgrade-697000" cluster
	I0307 14:40:56.579639    4179 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 14:40:56.579655    4179 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 14:40:56.579667    4179 cache.go:56] Caching tarball of preloaded images
	I0307 14:40:56.579746    4179 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:40:56.579752    4179 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 14:40:56.579811    4179 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kubernetes-upgrade-697000/config.json ...
	I0307 14:40:56.579822    4179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kubernetes-upgrade-697000/config.json: {Name:mkbf8e8d4a25a40c42b0fca576ec749fa75e5d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:40:56.580051    4179 start.go:360] acquireMachinesLock for kubernetes-upgrade-697000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:40:56.580087    4179 start.go:364] duration metric: took 28.084µs to acquireMachinesLock for "kubernetes-upgrade-697000"
	I0307 14:40:56.580099    4179 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:40:56.580129    4179 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:40:56.588660    4179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:40:56.613400    4179 start.go:159] libmachine.API.Create for "kubernetes-upgrade-697000" (driver="qemu2")
	I0307 14:40:56.613430    4179 client.go:168] LocalClient.Create starting
	I0307 14:40:56.613496    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:40:56.613530    4179 main.go:141] libmachine: Decoding PEM data...
	I0307 14:40:56.613538    4179 main.go:141] libmachine: Parsing certificate...
	I0307 14:40:56.613580    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:40:56.613601    4179 main.go:141] libmachine: Decoding PEM data...
	I0307 14:40:56.613607    4179 main.go:141] libmachine: Parsing certificate...
	I0307 14:40:56.613961    4179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:40:56.755558    4179 main.go:141] libmachine: Creating SSH key...
	I0307 14:40:56.841512    4179 main.go:141] libmachine: Creating Disk image...
	I0307 14:40:56.841521    4179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:40:56.841729    4179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:40:56.854599    4179 main.go:141] libmachine: STDOUT: 
	I0307 14:40:56.854633    4179 main.go:141] libmachine: STDERR: 
	I0307 14:40:56.854686    4179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2 +20000M
	I0307 14:40:56.865881    4179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:40:56.865909    4179 main.go:141] libmachine: STDERR: 
	I0307 14:40:56.865926    4179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:40:56.865930    4179 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:40:56.865959    4179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4c:a4:f4:81:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:40:56.867748    4179 main.go:141] libmachine: STDOUT: 
	I0307 14:40:56.867768    4179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:40:56.867787    4179 client.go:171] duration metric: took 254.349583ms to LocalClient.Create
	I0307 14:40:58.870003    4179 start.go:128] duration metric: took 2.289839958s to createHost
	I0307 14:40:58.870092    4179 start.go:83] releasing machines lock for "kubernetes-upgrade-697000", held for 2.289990625s
	W0307 14:40:58.870209    4179 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:40:58.879892    4179 out.go:177] * Deleting "kubernetes-upgrade-697000" in qemu2 ...
	W0307 14:40:58.912939    4179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:40:58.912978    4179 start.go:728] Will try again in 5 seconds ...
	I0307 14:41:03.915158    4179 start.go:360] acquireMachinesLock for kubernetes-upgrade-697000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:41:03.915451    4179 start.go:364] duration metric: took 222.792µs to acquireMachinesLock for "kubernetes-upgrade-697000"
	I0307 14:41:03.915491    4179 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:41:03.915619    4179 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:41:03.923902    4179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:41:03.957244    4179 start.go:159] libmachine.API.Create for "kubernetes-upgrade-697000" (driver="qemu2")
	I0307 14:41:03.957296    4179 client.go:168] LocalClient.Create starting
	I0307 14:41:03.957393    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:41:03.957446    4179 main.go:141] libmachine: Decoding PEM data...
	I0307 14:41:03.957458    4179 main.go:141] libmachine: Parsing certificate...
	I0307 14:41:03.957515    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:41:03.957550    4179 main.go:141] libmachine: Decoding PEM data...
	I0307 14:41:03.957563    4179 main.go:141] libmachine: Parsing certificate...
	I0307 14:41:03.958075    4179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:41:04.109282    4179 main.go:141] libmachine: Creating SSH key...
	I0307 14:41:04.160517    4179 main.go:141] libmachine: Creating Disk image...
	I0307 14:41:04.160523    4179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:41:04.160735    4179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:41:04.173775    4179 main.go:141] libmachine: STDOUT: 
	I0307 14:41:04.173799    4179 main.go:141] libmachine: STDERR: 
	I0307 14:41:04.173863    4179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2 +20000M
	I0307 14:41:04.184732    4179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:41:04.184848    4179 main.go:141] libmachine: STDERR: 
	I0307 14:41:04.184859    4179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:41:04.184863    4179 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:41:04.184892    4179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:25:05:a2:00:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:41:04.186636    4179 main.go:141] libmachine: STDOUT: 
	I0307 14:41:04.186657    4179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:41:04.186676    4179 client.go:171] duration metric: took 229.374625ms to LocalClient.Create
	I0307 14:41:06.188877    4179 start.go:128] duration metric: took 2.273228167s to createHost
	I0307 14:41:06.188965    4179 start.go:83] releasing machines lock for "kubernetes-upgrade-697000", held for 2.273488708s
	W0307 14:41:06.189319    4179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:41:06.198952    4179 out.go:177] 
	W0307 14:41:06.206027    4179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:41:06.206152    4179 out.go:239] * 
	* 
	W0307 14:41:06.208960    4179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:41:06.218968    4179 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-697000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-697000: (2.0438165s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-697000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-697000 status --format={{.Host}}: exit status 7 (51.902584ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180605s)

-- stdout --
	* [kubernetes-upgrade-697000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-697000" primary control-plane node in "kubernetes-upgrade-697000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-697000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-697000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:41:08.360254    4209 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:41:08.360509    4209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:41:08.360515    4209 out.go:304] Setting ErrFile to fd 2...
	I0307 14:41:08.360518    4209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:41:08.360703    4209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:41:08.362031    4209 out.go:298] Setting JSON to false
	I0307 14:41:08.379105    4209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4225,"bootTime":1709847043,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:41:08.379172    4209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:41:08.382971    4209 out.go:177] * [kubernetes-upgrade-697000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:41:08.389841    4209 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:41:08.393851    4209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:41:08.389886    4209 notify.go:220] Checking for updates...
	I0307 14:41:08.397829    4209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:41:08.400797    4209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:41:08.403854    4209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:41:08.407805    4209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:41:08.412211    4209 config.go:182] Loaded profile config "kubernetes-upgrade-697000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 14:41:08.412471    4209 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:41:08.416619    4209 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:41:08.423827    4209 start.go:297] selected driver: qemu2
	I0307 14:41:08.423833    4209 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:41:08.423890    4209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:41:08.426300    4209 cni.go:84] Creating CNI manager for ""
	I0307 14:41:08.426320    4209 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:41:08.426345    4209 start.go:340] cluster config:
	{Name:kubernetes-upgrade-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-697000 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:41:08.430614    4209 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:41:08.437800    4209 out.go:177] * Starting "kubernetes-upgrade-697000" primary control-plane node in "kubernetes-upgrade-697000" cluster
	I0307 14:41:08.441832    4209 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 14:41:08.441866    4209 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 14:41:08.441881    4209 cache.go:56] Caching tarball of preloaded images
	I0307 14:41:08.441997    4209 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:41:08.442004    4209 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 14:41:08.442074    4209 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kubernetes-upgrade-697000/config.json ...
	I0307 14:41:08.442453    4209 start.go:360] acquireMachinesLock for kubernetes-upgrade-697000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:41:08.442484    4209 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "kubernetes-upgrade-697000"
	I0307 14:41:08.442492    4209 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:41:08.442499    4209 fix.go:54] fixHost starting: 
	I0307 14:41:08.442609    4209 fix.go:112] recreateIfNeeded on kubernetes-upgrade-697000: state=Stopped err=<nil>
	W0307 14:41:08.442621    4209 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:41:08.450810    4209 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-697000" ...
	I0307 14:41:08.454746    4209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:25:05:a2:00:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:41:08.456846    4209 main.go:141] libmachine: STDOUT: 
	I0307 14:41:08.456865    4209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:41:08.456894    4209 fix.go:56] duration metric: took 14.393042ms for fixHost
	I0307 14:41:08.456899    4209 start.go:83] releasing machines lock for "kubernetes-upgrade-697000", held for 14.411125ms
	W0307 14:41:08.456905    4209 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:41:08.456941    4209 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:41:08.456945    4209 start.go:728] Will try again in 5 seconds ...
	I0307 14:41:13.459108    4209 start.go:360] acquireMachinesLock for kubernetes-upgrade-697000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:41:13.459343    4209 start.go:364] duration metric: took 175.916µs to acquireMachinesLock for "kubernetes-upgrade-697000"
	I0307 14:41:13.459440    4209 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:41:13.459449    4209 fix.go:54] fixHost starting: 
	I0307 14:41:13.459714    4209 fix.go:112] recreateIfNeeded on kubernetes-upgrade-697000: state=Stopped err=<nil>
	W0307 14:41:13.459723    4209 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:41:13.467962    4209 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-697000" ...
	I0307 14:41:13.472021    4209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:25:05:a2:00:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubernetes-upgrade-697000/disk.qcow2
	I0307 14:41:13.476041    4209 main.go:141] libmachine: STDOUT: 
	I0307 14:41:13.476068    4209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:41:13.476107    4209 fix.go:56] duration metric: took 16.6585ms for fixHost
	I0307 14:41:13.476115    4209 start.go:83] releasing machines lock for "kubernetes-upgrade-697000", held for 16.741667ms
	W0307 14:41:13.476178    4209 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-697000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-697000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:41:13.484942    4209 out.go:177] 
	W0307 14:41:13.488961    4209 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:41:13.488970    4209 out.go:239] * 
	* 
	W0307 14:41:13.489810    4209 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:41:13.500900    4209 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-697000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-697000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-697000 version --output=json: exit status 1 (41.397291ms)

** stderr ** 
	error: context "kubernetes-upgrade-697000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-07 14:41:13.553177 -0800 PST m=+3286.048237251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-697000 -n kubernetes-upgrade-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-697000 -n kubernetes-upgrade-697000: exit status 7 (32.163209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-697000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-697000
--- FAIL: TestKubernetesUpgrade (17.23s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.94s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18320
- KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current199820984/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.94s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18320
- KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1546060977/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)

TestStoppedBinaryUpgrade/Upgrade (579.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3944250776 start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3944250776 start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 : (45.454404917s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3944250776 -p stopped-upgrade-491000 stop
E0307 14:42:13.938064    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3944250776 -p stopped-upgrade-491000 stop: (12.106418042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-491000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0307 14:43:17.448567    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:46:57.011270    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 14:47:13.937480    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-491000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.567574375s)

-- stdout --
	* [stopped-upgrade-491000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-491000" primary control-plane node in "stopped-upgrade-491000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-491000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0307 14:42:16.306343    4258 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:42:16.306533    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:42:16.306537    4258 out.go:304] Setting ErrFile to fd 2...
	I0307 14:42:16.306539    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:42:16.306686    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:42:16.307795    4258 out.go:298] Setting JSON to false
	I0307 14:42:16.326572    4258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4293,"bootTime":1709847043,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:42:16.326638    4258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:42:16.330823    4258 out.go:177] * [stopped-upgrade-491000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:42:16.343737    4258 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:42:16.338794    4258 notify.go:220] Checking for updates...
	I0307 14:42:16.349762    4258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:42:16.352802    4258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:42:16.355778    4258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:42:16.362772    4258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:42:16.369792    4258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:42:16.373051    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:42:16.376751    4258 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 14:42:16.379763    4258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:42:16.383730    4258 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:42:16.389717    4258 start.go:297] selected driver: qemu2
	I0307 14:42:16.389723    4258 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:16.389784    4258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:42:16.392618    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:42:16.392638    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:42:16.392669    4258 start.go:340] cluster config:
	{Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:16.392729    4258 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:42:16.400786    4258 out.go:177] * Starting "stopped-upgrade-491000" primary control-plane node in "stopped-upgrade-491000" cluster
	I0307 14:42:16.404679    4258 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:42:16.404702    4258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 14:42:16.404711    4258 cache.go:56] Caching tarball of preloaded images
	I0307 14:42:16.404771    4258 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:42:16.404778    4258 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 14:42:16.404851    4258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/config.json ...
	I0307 14:42:16.405430    4258 start.go:360] acquireMachinesLock for stopped-upgrade-491000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:42:16.405461    4258 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "stopped-upgrade-491000"
	I0307 14:42:16.405470    4258 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:42:16.405475    4258 fix.go:54] fixHost starting: 
	I0307 14:42:16.405609    4258 fix.go:112] recreateIfNeeded on stopped-upgrade-491000: state=Stopped err=<nil>
	W0307 14:42:16.405619    4258 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:42:16.409772    4258 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-491000" ...
	I0307 14:42:16.417800    4258 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-491000 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/disk.qcow2
	I0307 14:42:16.467283    4258 main.go:141] libmachine: STDOUT: 
	I0307 14:42:16.467320    4258 main.go:141] libmachine: STDERR: 
	I0307 14:42:16.467326    4258 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
	I0307 14:42:35.918220    4258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/config.json ...
	I0307 14:42:35.918706    4258 machine.go:94] provisionDockerMachine start ...
	I0307 14:42:35.918803    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:35.919066    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:35.919075    4258 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 14:42:35.989267    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 14:42:35.989284    4258 buildroot.go:166] provisioning hostname "stopped-upgrade-491000"
	I0307 14:42:35.989360    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:35.989492    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:35.989500    4258 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-491000 && echo "stopped-upgrade-491000" | sudo tee /etc/hostname
	I0307 14:42:36.054504    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-491000
	
	I0307 14:42:36.054562    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.054685    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.054694    4258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-491000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-491000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-491000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 14:42:36.116493    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 14:42:36.116509    4258 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18320-1153/.minikube CaCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18320-1153/.minikube}
	I0307 14:42:36.116524    4258 buildroot.go:174] setting up certificates
	I0307 14:42:36.116530    4258 provision.go:84] configureAuth start
	I0307 14:42:36.116534    4258 provision.go:143] copyHostCerts
	I0307 14:42:36.116623    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem, removing ...
	I0307 14:42:36.116634    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem
	I0307 14:42:36.116765    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.pem (1078 bytes)
	I0307 14:42:36.116982    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem, removing ...
	I0307 14:42:36.116987    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem
	I0307 14:42:36.117044    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/cert.pem (1123 bytes)
	I0307 14:42:36.117183    4258 exec_runner.go:144] found /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem, removing ...
	I0307 14:42:36.117188    4258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem
	I0307 14:42:36.117243    4258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18320-1153/.minikube/key.pem (1679 bytes)
	I0307 14:42:36.117375    4258 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-491000 san=[127.0.0.1 localhost minikube stopped-upgrade-491000]
	I0307 14:42:36.175574    4258 provision.go:177] copyRemoteCerts
	I0307 14:42:36.175613    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 14:42:36.175621    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.203567    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 14:42:36.210657    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 14:42:36.217426    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 14:42:36.224210    4258 provision.go:87] duration metric: took 107.668792ms to configureAuth
	I0307 14:42:36.224219    4258 buildroot.go:189] setting minikube options for container-runtime
	I0307 14:42:36.224325    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:42:36.224365    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.224455    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.224461    4258 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 14:42:36.279911    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 14:42:36.279921    4258 buildroot.go:70] root file system type: tmpfs
	I0307 14:42:36.279971    4258 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 14:42:36.280013    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.280115    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.280147    4258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 14:42:36.336683    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 14:42:36.336734    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.336842    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.336852    4258 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 14:42:36.668252    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 14:42:36.668271    4258 machine.go:97] duration metric: took 749.554083ms to provisionDockerMachine
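The provisioning step above uses an idempotent-update pattern: the new unit is written to docker.service.new, diffed against the existing file, and only swapped in (followed by daemon-reload and a restart) when the content differs. A rough Go equivalent of that check, written as an illustration rather than minikube's implementation; the path and unit body below are placeholders:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnitIfChanged writes want to path only when the current content
// differs, and reports whether a daemon-reload/restart is warranted.
func updateUnitIfChanged(path string, want []byte) (bool, error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // identical content: skip the restart entirely
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, want, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // swap the candidate into place
}

func main() {
	changed, err := updateUnitIfChanged("/tmp/docker.service",
		[]byte("[Unit]\nDescription=example\n"))
	fmt.Println("restart needed:", changed, "err:", err)
}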
	I0307 14:42:36.668278    4258 start.go:293] postStartSetup for "stopped-upgrade-491000" (driver="qemu2")
	I0307 14:42:36.668285    4258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 14:42:36.668353    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 14:42:36.668363    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.699681    4258 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 14:42:36.701177    4258 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 14:42:36.701184    4258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/addons for local assets ...
	I0307 14:42:36.701255    4258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18320-1153/.minikube/files for local assets ...
	I0307 14:42:36.701376    4258 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0307 14:42:36.701503    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 14:42:36.704231    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:42:36.711499    4258 start.go:296] duration metric: took 43.214875ms for postStartSetup
	I0307 14:42:36.711511    4258 fix.go:56] duration metric: took 20.305998208s for fixHost
	I0307 14:42:36.711543    4258 main.go:141] libmachine: Using SSH client type: native
	I0307 14:42:36.711645    4258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102985a30] 0x102988290 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0307 14:42:36.711650    4258 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 14:42:36.767733    4258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709851357.210679629
	
	I0307 14:42:36.767740    4258 fix.go:216] guest clock: 1709851357.210679629
	I0307 14:42:36.767744    4258 fix.go:229] Guest: 2024-03-07 14:42:37.210679629 -0800 PST Remote: 2024-03-07 14:42:36.711513 -0800 PST m=+20.438249001 (delta=499.166629ms)
	I0307 14:42:36.767755    4258 fix.go:200] guest clock delta is within tolerance: 499.166629ms
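The fix above reads the guest clock over SSH (date +%s.%N), computes the delta against the host clock, and accepts it when it falls inside a tolerance window. A small sketch of that comparison; the 2-second threshold is an assumption for illustration, not the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the absolute guest/host clock
// delta is at or below the given tolerance.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(499 * time.Millisecond) // the delta seen in the log
	fmt.Println(withinClockTolerance(guest, host, 2*time.Second))
}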
	I0307 14:42:36.767757    4258 start.go:83] releasing machines lock for "stopped-upgrade-491000", held for 20.362252833s
	I0307 14:42:36.767823    4258 ssh_runner.go:195] Run: cat /version.json
	I0307 14:42:36.767828    4258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 14:42:36.767832    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:42:36.767844    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	W0307 14:42:36.768388    4258 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50471: connect: connection refused
	I0307 14:42:36.768413    4258 retry.go:31] will retry after 315.077034ms: dial tcp [::1]:50471: connect: connection refused
	W0307 14:42:36.796587    4258 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 14:42:36.796639    4258 ssh_runner.go:195] Run: systemctl --version
	I0307 14:42:36.798651    4258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 14:42:36.800494    4258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 14:42:36.800522    4258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 14:42:36.803601    4258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 14:42:36.808227    4258 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 14:42:36.808234    4258 start.go:494] detecting cgroup driver to use...
	I0307 14:42:36.808308    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:42:36.815562    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 14:42:36.818978    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 14:42:36.821955    4258 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 14:42:36.821996    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 14:42:36.825157    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:42:36.828110    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 14:42:36.830962    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 14:42:36.833545    4258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 14:42:36.836521    4258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 14:42:36.840002    4258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 14:42:36.842659    4258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 14:42:36.845088    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:36.911974    4258 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 14:42:36.918381    4258 start.go:494] detecting cgroup driver to use...
	I0307 14:42:36.918452    4258 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 14:42:36.925337    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:42:36.931254    4258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 14:42:36.938149    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 14:42:36.942969    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 14:42:36.947604    4258 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 14:42:37.009368    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 14:42:37.014748    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 14:42:37.020647    4258 ssh_runner.go:195] Run: which cri-dockerd
	I0307 14:42:37.022151    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 14:42:37.024933    4258 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 14:42:37.029919    4258 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 14:42:37.095728    4258 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 14:42:37.162960    4258 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 14:42:37.163031    4258 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 14:42:37.168954    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.236970    4258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:42:37.352342    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 14:42:37.357112    4258 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 14:42:37.362191    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:42:37.366609    4258 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 14:42:37.432079    4258 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 14:42:37.494141    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.563037    4258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 14:42:37.568838    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 14:42:37.573664    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:37.638784    4258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 14:42:37.677752    4258 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 14:42:37.677829    4258 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 14:42:37.680594    4258 start.go:562] Will wait 60s for crictl version
	I0307 14:42:37.680662    4258 ssh_runner.go:195] Run: which crictl
	I0307 14:42:37.681994    4258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 14:42:37.696823    4258 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 14:42:37.696900    4258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:42:37.713823    4258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 14:42:37.735412    4258 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 14:42:37.735545    4258 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 14:42:37.737026    4258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 14:42:37.740649    4258 kubeadm.go:877] updating cluster {Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 14:42:37.740699    4258 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 14:42:37.740742    4258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:42:37.751515    4258 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:42:37.751524    4258 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 14:42:37.751576    4258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:42:37.755211    4258 ssh_runner.go:195] Run: which lz4
	I0307 14:42:37.756428    4258 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 14:42:37.757773    4258 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 14:42:37.757783    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 14:42:38.544464    4258 docker.go:649] duration metric: took 788.058833ms to copy over tarball
	I0307 14:42:38.544526    4258 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 14:42:39.718690    4258 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.174143833s)
	I0307 14:42:39.718705    4258 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 14:42:39.734468    4258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 14:42:39.737494    4258 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 14:42:39.742441    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:39.807028    4258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 14:42:41.499685    4258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.692637167s)
	I0307 14:42:41.499768    4258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 14:42:41.513790    4258 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 14:42:41.513800    4258 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 14:42:41.513805    4258 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 14:42:41.520127    4258 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:41.520194    4258 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:41.520240    4258 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:41.520293    4258 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:41.520323    4258 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:41.520371    4258 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:41.520873    4258 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 14:42:41.521188    4258 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:41.529974    4258 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:41.530066    4258 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 14:42:41.530119    4258 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:41.530137    4258 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:41.530361    4258 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:41.530369    4258 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:41.530281    4258 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:41.530282    4258 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0307 14:42:43.474838    4258 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 14:42:43.475548    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.514944    4258 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 14:42:43.514993    4258 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.515097    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 14:42:43.536069    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 14:42:43.536253    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:42:43.538904    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 14:42:43.538929    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 14:42:43.575901    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.579974    4258 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 14:42:43.579985    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 14:42:43.593423    4258 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 14:42:43.593446    4258 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.593505    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 14:42:43.615596    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 14:42:43.621699    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.633048    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 14:42:43.633093    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 14:42:43.633112    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.633180    4258 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 14:42:43.633193    4258 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 14:42:43.633210    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 14:42:43.643236    4258 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 14:42:43.643261    4258 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.643320    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 14:42:43.658528    4258 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 14:42:43.658559    4258 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.658564    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 14:42:43.658618    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 14:42:43.658670    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 14:42:43.662374    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.669274    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.670559    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 14:42:43.684364    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 14:42:43.684382    4258 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 14:42:43.684399    4258 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.684395    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 14:42:43.684442    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 14:42:43.684540    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 14:42:43.689318    4258 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 14:42:43.689336    4258 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.689387    4258 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 14:42:43.696320    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 14:42:43.703593    4258 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 14:42:43.703607    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 14:42:43.703948    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 14:42:43.730986    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0307 14:42:44.187716    4258 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 14:42:44.188286    4258 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.227903    4258 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 14:42:44.227940    4258 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.228049    4258 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:42:44.256187    4258 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 14:42:44.256364    4258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:42:44.258500    4258 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 14:42:44.258522    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 14:42:44.291532    4258 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 14:42:44.291552    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 14:42:44.530708    4258 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 14:42:44.530749    4258 cache_images.go:92] duration metric: took 3.016928208s to LoadCachedImages
	W0307 14:42:44.530787    4258 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
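The cached-image load degrades here because one tarball is missing on the host: kube-scheduler_v1.24.1 under .minikube/cache/images/arm64/registry.k8s.io. A quick, illustrative pre-flight check that stats each expected tarball before loading; the cache root and image names are taken from the log above, and the program itself is hypothetical, not part of minikube:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache root as seen in the log; adjust for your MINIKUBE_HOME.
	root := "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io"
	for _, name := range []string{
		"kube-apiserver_v1.24.1",
		"kube-controller-manager_v1.24.1",
		"kube-scheduler_v1.24.1", // the one reported missing above
		"kube-proxy_v1.24.1",
	} {
		p := filepath.Join(root, name)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("MISSING %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("ok      %s\n", p)
	}
}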
	I0307 14:42:44.530792    4258 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 14:42:44.530838    4258 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-491000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 14:42:44.530906    4258 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 14:42:44.549604    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:42:44.549615    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:42:44.549621    4258 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 14:42:44.549630    4258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-491000 NodeName:stopped-upgrade-491000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 14:42:44.549702    4258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-491000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 14:42:44.549756    4258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 14:42:44.552661    4258 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 14:42:44.552688    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 14:42:44.555725    4258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 14:42:44.560808    4258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 14:42:44.565715    4258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 14:42:44.570610    4258 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 14:42:44.571850    4258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
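The bash one-liner above updates /etc/hosts idempotently: strip any previous line ending in the control-plane alias, append the current mapping, and copy the result back into place via a temp file and sudo. The same idea in Go, as a sketch (writing /etc/hosts in place requires root; the logged version goes through /tmp/h.$$ and `sudo cp` instead):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        kept := []string{}
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the control-plane alias.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }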
	I0307 14:42:44.575809    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:42:44.643497    4258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:42:44.650164    4258 certs.go:68] Setting up /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000 for IP: 10.0.2.15
	I0307 14:42:44.650172    4258 certs.go:194] generating shared ca certs ...
	I0307 14:42:44.650188    4258 certs.go:226] acquiring lock for ca certs: {Name:mkd070f5c39ed755a49409426e8106f4ed85635d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.650343    4258 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key
	I0307 14:42:44.650394    4258 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key
	I0307 14:42:44.650404    4258 certs.go:256] generating profile certs ...
	I0307 14:42:44.650485    4258 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key
	I0307 14:42:44.650502    4258 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8
	I0307 14:42:44.650512    4258 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 14:42:44.740661    4258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 ...
	I0307 14:42:44.740676    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8: {Name:mkb1ccdb7c8c63a52370c7d14717a816578c4d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.740988    4258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8 ...
	I0307 14:42:44.740993    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8: {Name:mk4abdd846f6791dd793d25a1f785075eadbe389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.741131    4258 certs.go:381] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt.e7c187b8 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt
	I0307 14:42:44.741925    4258 certs.go:385] copying /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key.e7c187b8 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key
	I0307 14:42:44.742118    4258 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.key
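The apiserver certificate regenerated above is signed by the shared minikubeCA and must carry every IP the server answers on as a SAN: 10.96.0.1 is the in-cluster service VIP, 127.0.0.1 and 10.0.2.15 the loopback and QEMU user-network guest addresses. A self-contained crypto/x509 sketch of issuing such a cert (the throwaway CA, 24-hour lifetime, and omission of DNS SANs are simplifications of this sketch, not minikube's behavior):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for the shared minikubeCA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the IP SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
        _ = srvDER // minikube PEM-encodes this into apiserver.crt.<hash> before copying it into place
    }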
	I0307 14:42:44.742251    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem (1338 bytes)
	W0307 14:42:44.742280    4258 certs.go:480] ignoring /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0307 14:42:44.742286    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 14:42:44.742304    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem (1078 bytes)
	I0307 14:42:44.742319    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem (1123 bytes)
	I0307 14:42:44.742335    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/key.pem (1679 bytes)
	I0307 14:42:44.742373    4258 certs.go:484] found cert: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0307 14:42:44.742678    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 14:42:44.749570    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 14:42:44.756528    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 14:42:44.763906    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 14:42:44.771350    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 14:42:44.778502    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 14:42:44.785344    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 14:42:44.792157    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 14:42:44.799717    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 14:42:44.806672    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0307 14:42:44.813364    4258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0307 14:42:44.820026    4258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 14:42:44.825255    4258 ssh_runner.go:195] Run: openssl version
	I0307 14:42:44.827110    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 14:42:44.830018    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.831373    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.831396    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 14:42:44.833280    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 14:42:44.836475    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0307 14:42:44.839807    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.841233    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 21:56 /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.841255    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0307 14:42:44.843006    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
	I0307 14:42:44.845824    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0307 14:42:44.848839    4258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.850337    4258 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 21:56 /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.850357    4258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0307 14:42:44.852137    4258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
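OpenSSL resolves trust anchors by subject hash: it looks for a file named <hash>.0 under /etc/ssl/certs, which is why each installed PEM above gets an `openssl x509 -hash -noout` call followed by a `ln -fs` (b5213941.0 for minikubeCA in this run). A sketch of that dance in Go, with paths taken from the log (creating the symlink needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) { // mirrors `test -L || ln -fs`
            _ = os.Symlink(pem, link)
        }
    }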
	I0307 14:42:44.855691    4258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 14:42:44.857126    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 14:42:44.859901    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 14:42:44.861764    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 14:42:44.863730    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 14:42:44.865736    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 14:42:44.867486    4258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
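`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds, so this run of checks is how the restart path decides the existing control-plane certs are still good for at least a day. The equivalent check in Go, as a sketch (not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded cert at path expires
    // within the next d, matching `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }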
	I0307 14:42:44.869315    4258 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 14:42:44.869381    4258 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:42:44.879475    4258 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 14:42:44.882929    4258 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 14:42:44.882935    4258 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 14:42:44.882938    4258 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 14:42:44.882966    4258 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 14:42:44.885887    4258 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 14:42:44.886193    4258 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-491000" does not appear in /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:42:44.886291    4258 kubeconfig.go:62] /Users/jenkins/minikube-integration/18320-1153/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-491000" cluster setting kubeconfig missing "stopped-upgrade-491000" context setting]
	I0307 14:42:44.886542    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:42:44.886976    4258 kapi.go:59] client config for stopped-upgrade-491000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c7b6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:42:44.887294    4258 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 14:42:44.890495    4258 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-491000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
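The drift check above is just `diff -u` against the freshly rendered YAML: any non-zero exit means the stored config (here the old criSocket form without the unix:// prefix, plus a systemd cgroup driver) no longer matches what this minikube build would generate, so the cluster is reconfigured from the new file rather than reused as-is. A sketch of that decision:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if err != nil { // diff exits non-zero when the files differ
            fmt.Printf("config drift detected, will reconfigure:\n%s", out)
            return
        }
        fmt.Println("kubeadm config unchanged")
    }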
	I0307 14:42:44.890501    4258 kubeadm.go:1153] stopping kube-system containers ...
	I0307 14:42:44.890552    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 14:42:44.901324    4258 docker.go:483] Stopping containers: [e4e19e63a4f0 dd38b5bb91df f3df7e225915 2654012828a3 b8f8162b3de6 9b1325ed02f8 be0f33ea35aa 6e9d4e2d34c9]
	I0307 14:42:44.901388    4258 ssh_runner.go:195] Run: docker stop e4e19e63a4f0 dd38b5bb91df f3df7e225915 2654012828a3 b8f8162b3de6 9b1325ed02f8 be0f33ea35aa 6e9d4e2d34c9
	I0307 14:42:44.911973    4258 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 14:42:44.917995    4258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:42:44.920618    4258 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:42:44.920623    4258 kubeadm.go:156] found existing configuration files:
	
	I0307 14:42:44.920650    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0307 14:42:44.923243    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:42:44.923266    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:42:44.925730    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0307 14:42:44.928395    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:42:44.928427    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:42:44.931679    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0307 14:42:44.934565    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:42:44.934591    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:42:44.937196    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0307 14:42:44.940188    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:42:44.940215    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 14:42:44.943349    4258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:42:44.946174    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:44.971521    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.377027    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.494526    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.516427    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 14:42:45.540272    4258 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:42:45.540363    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:46.042392    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:46.542415    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:42:46.546892    4258 api_server.go:72] duration metric: took 1.006617708s to wait for apiserver process to appear ...
	I0307 14:42:46.546902    4258 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:42:46.546911    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:51.549021    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:51.549068    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:42:56.549393    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:42:56.549456    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:01.550039    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:01.550105    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:06.550907    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:06.550930    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:11.551752    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:11.551844    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:16.553277    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:16.553343    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:21.554800    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:21.554851    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:26.555694    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:26.555820    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:31.557834    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:31.557884    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:36.560193    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:36.560219    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:41.560991    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:41.561045    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:46.563347    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
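Once the 500 ms pgrep loop above confirms a kube-apiserver process exists, the wait switches to polling /healthz over HTTPS. Each probe fails only when the HTTP client's own timeout fires, so an apiserver that never answers surfaces as one "context deadline exceeded" line roughly every five seconds, exactly the spacing in this log. A sketch of that loop (the 5-second timeout and the overall budget are inferred from the log spacing, not read from minikube's source):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing of the "stopped:" lines
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed budget for this sketch
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // the repeated failure mode in this log
                time.Sleep(500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("gave up waiting for apiserver healthz")
    }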
	I0307 14:43:46.563514    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:46.582155    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:43:46.582244    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:46.597047    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:43:46.597139    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:46.609497    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:43:46.609569    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:46.619760    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:43:46.619838    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:46.630100    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:43:46.630169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:46.640447    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:43:46.640541    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:46.650887    4258 logs.go:276] 0 containers: []
	W0307 14:43:46.650898    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:46.650958    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:46.661696    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:43:46.661713    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:43:46.661718    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:43:46.679160    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:46.679170    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:46.704340    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:43:46.704349    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:46.715997    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:46.716006    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:46.720371    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:43:46.720381    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:43:46.734472    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:43:46.734483    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:43:46.745793    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:43:46.745804    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:43:46.761124    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:43:46.761134    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:43:46.775029    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:46.775040    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:46.884593    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:43:46.884605    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:43:46.929101    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:43:46.929115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:43:46.942952    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:43:46.942964    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:43:46.954842    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:46.954855    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:46.995098    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:43:46.995114    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:43:47.011628    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:43:47.011640    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:43:47.027902    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:43:47.027915    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:43:47.041821    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:43:47.041834    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
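When a healthz wait times out, the harness snapshots cluster state before retrying: for each control-plane component it lists matching containers (including exited ones, hence `docker ps -a`) and tails the last 400 lines of each, alongside the kubelet and Docker journals, dmesg, and `kubectl describe nodes`. The cycles that follow repeat this same sweep. A condensed sketch of the container part:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, name := range components {
            // List all containers (running or exited) for this component.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
            }
        }
    }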
	I0307 14:43:49.556080    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:43:54.557211    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:43:54.557407    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:43:54.582216    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:43:54.582329    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:43:54.598787    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:43:54.598857    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:43:54.612209    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:43:54.612277    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:43:54.623516    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:43:54.623591    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:43:54.634153    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:43:54.634222    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:43:54.647854    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:43:54.647931    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:43:54.657973    4258 logs.go:276] 0 containers: []
	W0307 14:43:54.657986    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:43:54.658045    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:43:54.668472    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:43:54.668490    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:43:54.668502    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:43:54.706970    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:43:54.706979    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:43:54.721701    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:43:54.721716    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:43:54.733343    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:43:54.733357    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:43:54.745363    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:43:54.745373    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:43:54.763510    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:43:54.763521    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:43:54.778465    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:43:54.778480    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:43:54.797305    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:43:54.797317    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:43:54.808811    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:43:54.808822    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:43:54.824456    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:43:54.824473    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:43:54.836845    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:43:54.836856    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:43:54.849586    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:43:54.849598    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:43:54.862106    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:43:54.862117    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:43:54.866744    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:43:54.866750    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:43:54.905040    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:43:54.905054    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:43:54.929044    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:43:54.929054    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:43:54.966954    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:43:54.966967    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:43:57.487378    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:02.489310    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:02.489421    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:02.504506    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:02.504591    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:02.517308    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:02.517383    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:02.529183    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:02.529259    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:02.540724    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:02.540800    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:02.552721    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:02.552795    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:02.570547    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:02.570612    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:02.582045    4258 logs.go:276] 0 containers: []
	W0307 14:44:02.582061    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:02.582140    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:02.594430    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:02.594452    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:02.594458    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:02.607180    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:02.607199    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:02.647738    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:02.647763    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:02.661818    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:02.661828    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:02.674035    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:02.674047    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:02.688442    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:02.688463    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:02.704002    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:02.704020    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:02.708361    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:02.708370    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:02.750336    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:02.750349    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:02.764816    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:02.764828    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:02.777293    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:02.777304    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:02.802988    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:02.803006    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:02.842910    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:02.842924    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:02.857542    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:02.857557    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:02.869429    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:02.869443    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:02.885590    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:02.885604    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:02.905153    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:02.905167    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:05.420066    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:10.422487    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:10.422694    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:10.435352    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:10.435432    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:10.445953    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:10.446021    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:10.455984    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:10.456054    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:10.466401    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:10.466472    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:10.477098    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:10.477169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:10.488083    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:10.488155    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:10.498108    4258 logs.go:276] 0 containers: []
	W0307 14:44:10.498119    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:10.498193    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:10.511434    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:10.511452    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:10.511458    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:10.548267    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:10.548280    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:10.552217    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:10.552225    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:10.591777    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:10.591793    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:10.604352    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:10.604367    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:10.622821    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:10.622834    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:10.635376    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:10.635386    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:10.651438    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:10.651448    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:10.674939    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:10.674946    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:10.688553    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:10.688563    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:10.701463    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:10.701473    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:10.713378    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:10.713389    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:10.748215    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:10.748226    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:10.787319    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:10.787331    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:10.808110    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:10.808121    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:10.822335    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:10.822346    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:10.833549    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:10.833561    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:13.352564    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:18.354911    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:18.355169    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:18.378935    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:18.379032    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:18.392458    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:18.392533    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:18.403857    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:18.403939    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:18.413968    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:18.414042    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:18.424532    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:18.424605    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:18.442315    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:18.442384    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:18.452635    4258 logs.go:276] 0 containers: []
	W0307 14:44:18.452646    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:18.452706    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:18.463384    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:18.463402    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:18.463418    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:18.479295    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:18.479307    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:18.493019    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:18.493029    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:18.507221    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:18.507234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:18.518566    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:18.518576    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:18.529626    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:18.529637    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:18.540911    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:18.540922    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:18.564787    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:18.564798    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:18.608549    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:18.608559    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:18.619742    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:18.619754    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:18.634890    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:18.634904    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:18.652382    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:18.652394    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:18.693181    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:18.693193    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:18.707584    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:18.707595    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:18.719942    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:18.719954    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:18.759677    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:18.759701    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:18.764408    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:18.764419    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:21.278555    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:26.280849    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:26.281000    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:26.295942    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:26.296031    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:26.308145    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:26.310008    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:26.320529    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:26.320598    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:26.331049    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:26.331124    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:26.341800    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:26.341865    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:26.352348    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:26.352411    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:26.362732    4258 logs.go:276] 0 containers: []
	W0307 14:44:26.362740    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:26.362789    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:26.372993    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:26.373008    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:26.373014    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:26.409828    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:26.409838    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:26.413988    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:26.413995    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:26.428461    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:26.428471    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:26.445555    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:26.445568    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:26.456788    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:26.456799    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:26.481611    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:26.481621    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:26.519514    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:26.519526    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:26.533996    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:26.534005    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:26.545420    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:26.545436    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:26.561405    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:26.561415    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:26.572381    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:26.572394    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:26.584994    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:26.585008    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:26.620792    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:26.620802    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:26.635476    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:26.635489    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:26.648083    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:26.648093    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:26.659588    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:26.659599    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:29.172809    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:34.174992    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:34.175121    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:34.188574    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:34.188651    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:34.199287    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:34.199359    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:34.213995    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:34.214061    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:34.224736    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:34.224824    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:34.235110    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:34.235173    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:34.246808    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:34.246874    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:34.257399    4258 logs.go:276] 0 containers: []
	W0307 14:44:34.257410    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:34.257462    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:34.268121    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:34.268138    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:34.268143    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:34.283093    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:34.283104    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:34.294942    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:34.294952    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:34.312400    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:34.312411    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:34.325171    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:34.325180    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:34.336408    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:34.336418    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:34.371155    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:34.371167    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:34.383161    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:34.383171    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:34.394757    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:34.394769    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:34.399113    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:34.399119    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:34.437002    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:34.437015    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:34.451109    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:34.451123    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:34.491094    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:34.491108    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:34.505379    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:34.505390    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:34.516788    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:34.516800    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:34.530956    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:34.530965    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:34.542462    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:34.542472    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:37.067675    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:42.070038    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:42.070378    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:42.100282    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:42.100419    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:42.119254    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:42.119352    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:42.133266    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:42.133336    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:42.145059    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:42.145143    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:42.155859    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:42.155932    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:42.166889    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:42.166961    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:42.177346    4258 logs.go:276] 0 containers: []
	W0307 14:44:42.177359    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:42.177418    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:42.187926    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:42.187942    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:42.187947    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:42.225476    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:42.225489    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:42.240297    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:42.240308    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:42.258211    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:42.258223    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:42.276224    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:42.276234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:42.287723    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:42.287733    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:42.299025    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:42.299036    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:42.311433    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:42.311442    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:42.330439    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:42.330448    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:42.354845    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:42.354853    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:42.369774    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:42.369789    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:42.374235    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:42.374242    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:42.409220    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:42.409234    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:42.446745    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:42.446755    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:42.461173    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:42.461184    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:42.472950    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:42.472963    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:42.487599    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:42.487610    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:45.000963    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:50.003271    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:50.003498    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:50.026906    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:50.027009    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:50.042788    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:50.042875    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:50.055696    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:50.055775    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:50.066976    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:50.067042    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:50.077922    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:50.077990    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:50.088271    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:50.088344    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:50.098636    4258 logs.go:276] 0 containers: []
	W0307 14:44:50.098646    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:50.098701    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:50.113729    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:50.113746    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:50.113750    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:50.126259    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:50.126274    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:50.164253    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:50.164267    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:50.178607    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:50.178618    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:50.193542    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:50.193553    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:50.205686    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:50.205696    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:50.220126    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:50.220138    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:50.240568    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:50.240578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:50.258788    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:50.258801    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:50.297899    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:50.297909    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:44:50.310763    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:50.310774    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:50.322622    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:50.322634    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:50.327304    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:50.327310    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:50.352850    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:50.352860    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:50.390380    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:50.390391    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:50.406107    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:50.406124    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:50.431442    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:50.431451    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:52.944251    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:44:57.946981    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:44:57.947379    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:44:57.994297    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:44:57.994417    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:44:58.010352    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:44:58.010436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:44:58.023355    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:44:58.023437    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:44:58.035296    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:44:58.035369    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:44:58.045746    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:44:58.045816    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:44:58.056099    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:44:58.056167    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:44:58.066490    4258 logs.go:276] 0 containers: []
	W0307 14:44:58.066502    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:44:58.066559    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:44:58.077222    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:44:58.077240    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:44:58.077246    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:44:58.081474    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:44:58.081485    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:44:58.120121    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:44:58.120134    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:44:58.138083    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:44:58.138095    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:44:58.174286    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:44:58.174299    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:44:58.187790    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:44:58.187800    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:44:58.201943    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:44:58.201957    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:44:58.219606    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:44:58.219619    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:44:58.231390    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:44:58.231401    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:44:58.256179    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:44:58.256188    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:44:58.267835    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:44:58.267845    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:44:58.285493    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:44:58.285504    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:44:58.297519    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:44:58.297529    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:44:58.334067    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:44:58.334075    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:44:58.349743    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:44:58.349760    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:44:58.360884    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:44:58.360896    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:44:58.372655    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:44:58.372666    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:00.889851    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:05.892187    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:05.892323    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:05.908621    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:05.908703    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:05.921514    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:05.921615    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:05.932702    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:05.932770    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:05.947903    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:05.947975    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:05.959116    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:05.959194    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:05.969590    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:05.969663    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:05.979970    4258 logs.go:276] 0 containers: []
	W0307 14:45:05.979980    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:05.980036    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:05.995992    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:05.996009    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:05.996014    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:06.007970    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:06.007981    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:06.031173    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:06.031181    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:06.043094    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:06.043104    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:06.078201    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:06.078215    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:06.092222    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:06.092231    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:06.104244    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:06.104256    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:06.122466    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:06.122477    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:06.160980    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:06.160995    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:06.165157    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:06.165164    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:06.180043    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:06.180055    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:06.194551    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:06.194564    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:06.205920    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:06.205932    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:06.218724    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:06.218735    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:06.255419    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:06.255430    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:06.266889    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:06.266899    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:06.281659    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:06.281669    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:08.795117    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:13.797612    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:13.797841    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:13.813863    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:13.813954    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:13.826437    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:13.826515    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:13.837789    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:13.837861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:13.848474    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:13.848555    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:13.861554    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:13.861620    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:13.872385    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:13.872461    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:13.882960    4258 logs.go:276] 0 containers: []
	W0307 14:45:13.882970    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:13.883022    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:13.893466    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:13.893485    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:13.893490    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:13.915818    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:13.915828    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:13.927677    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:13.927691    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:13.939229    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:13.939240    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:13.957935    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:13.957947    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:13.970421    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:13.970431    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:14.008716    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:14.008726    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:14.022915    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:14.022926    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:14.034419    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:14.034431    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:14.072512    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:14.072524    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:14.084661    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:14.084674    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:14.103203    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:14.103214    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:14.117677    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:14.117686    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:14.128862    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:14.128876    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:14.145766    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:14.145776    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:14.149949    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:14.149955    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:14.184346    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:14.184359    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:16.700900    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:21.703158    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:21.703363    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:21.729520    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:21.729623    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:21.746982    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:21.747056    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:21.760063    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:21.760141    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:21.771751    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:21.771825    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:21.782184    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:21.782253    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:21.793045    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:21.793112    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:21.805884    4258 logs.go:276] 0 containers: []
	W0307 14:45:21.805892    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:21.805949    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:21.816118    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:21.816136    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:21.816141    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:21.852646    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:21.852657    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:21.866143    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:21.866156    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:21.880702    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:21.880713    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:21.891938    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:21.891951    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:21.909414    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:21.909423    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:21.921035    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:21.921048    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:21.925104    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:21.925111    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:21.939232    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:21.939243    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:21.954575    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:21.954587    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:21.966452    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:21.966463    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:21.989816    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:21.989825    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:22.024275    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:22.024287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:22.038933    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:22.038942    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:22.050721    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:22.050732    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:22.089372    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:22.089381    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:22.100397    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:22.100407    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:24.615176    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:29.617623    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:29.617943    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:29.653062    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:29.653211    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:29.671349    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:29.671443    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:29.686605    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:29.686674    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:29.697624    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:29.697687    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:29.708447    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:29.708521    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:29.720232    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:29.720309    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:29.731067    4258 logs.go:276] 0 containers: []
	W0307 14:45:29.731079    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:29.731135    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:29.741955    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:29.741972    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:29.741977    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:29.777750    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:29.777761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:29.794979    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:29.794989    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:29.806553    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:29.806563    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:29.818228    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:29.818240    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:29.832559    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:29.832570    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:29.847147    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:29.847157    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:29.862211    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:29.862226    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:29.900102    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:29.900110    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:29.913842    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:29.913852    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:29.925526    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:29.925536    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:29.949509    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:29.949519    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:29.953427    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:29.953433    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:29.992014    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:29.992024    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:30.002949    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:30.002960    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:30.017158    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:30.017172    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:30.034340    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:30.034350    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:32.548692    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:37.549325    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:37.549476    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:37.569762    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:37.569846    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:37.582891    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:37.582969    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:37.593543    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:37.593610    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:37.603758    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:37.603822    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:37.614714    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:37.614781    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:37.625707    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:37.625775    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:37.639182    4258 logs.go:276] 0 containers: []
	W0307 14:45:37.639196    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:37.639258    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:37.650207    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:37.650224    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:37.650231    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:37.684675    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:37.684689    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:37.698680    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:37.698693    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:37.716720    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:37.716731    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:37.755190    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:37.755203    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:37.770647    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:37.770660    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:37.782165    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:37.782176    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:37.793500    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:37.793514    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:37.804610    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:37.804628    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:37.808590    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:37.808598    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:37.823421    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:37.823430    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:37.835960    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:37.835971    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:37.847622    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:37.847637    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:37.863148    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:37.863158    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:37.903427    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:37.903438    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:37.917312    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:37.917322    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:37.929040    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:37.929050    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:40.454575    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:45.457310    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:45:45.457466    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:45.477369    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:45.477468    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:45.492294    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:45.492376    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:45.504647    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:45.504720    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:45.515165    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:45.515241    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:45.529589    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:45.529664    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:45.540149    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:45.540275    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:45.550153    4258 logs.go:276] 0 containers: []
	W0307 14:45:45.550166    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:45.550225    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:45.562118    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:45.562132    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:45.562138    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:45.566556    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:45.566562    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:45.580688    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:45.580696    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:45.592766    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:45.592774    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:45.610322    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:45.610333    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:45:45.622839    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:45.622853    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:45.660835    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:45.660843    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:45.674762    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:45.674772    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:45.711659    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:45.711669    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:45.725401    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:45.725416    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:45.747596    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:45.747602    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:45.759869    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:45.759880    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:45.772501    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:45.772513    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:45.786860    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:45.786870    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:45.798262    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:45.798272    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:45.809671    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:45.809686    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:45.844018    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:45.844027    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:48.358981    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:45:53.361457    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
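
For reference, the healthz probe repeated throughout this log is equivalent to the following manual check (a sketch: the endpoint 10.0.2.15:8443 is taken from the log, while the CA path and the 5-second timeout are assumptions inferred from the 5-second gaps between timestamps):

    # Probe the apiserver health endpoint minikube is polling (sketch;
    # --cacert path and timeout are assumptions, not confirmed flags).
    curl --max-time 5 \
         --cacert /var/lib/minikube/certs/ca.crt \
         https://10.0.2.15:8443/healthz
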
	I0307 14:45:53.361755    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:45:53.385735    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:45:53.385859    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:45:53.402705    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:45:53.402782    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:45:53.421182    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:45:53.421250    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:45:53.432194    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:45:53.432264    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:45:53.442624    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:45:53.442686    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:45:53.453414    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:45:53.453486    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:45:53.463521    4258 logs.go:276] 0 containers: []
	W0307 14:45:53.463533    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:45:53.463586    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:45:53.480889    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:45:53.480907    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:45:53.480912    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:45:53.498436    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:45:53.498446    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:45:53.513371    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:45:53.513380    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:45:53.526104    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:45:53.526115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:45:53.540412    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:45:53.540425    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:45:53.580975    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:45:53.580987    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:45:53.595901    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:45:53.595911    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:45:53.607545    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:45:53.607556    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:45:53.619409    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:45:53.619419    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:45:53.658911    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:45:53.658924    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:45:53.663573    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:45:53.663578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:45:53.682629    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:45:53.682639    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:45:53.693919    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:45:53.693931    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:45:53.716469    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:45:53.716476    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:45:53.756001    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:45:53.756011    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:45:53.767630    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:45:53.767641    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:45:53.779045    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:45:53.779055    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
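
Each "Gathering logs" pass above can be reproduced by hand from the same commands the runner issues; for one component it boils down to the following (a sketch assembled from the Run: lines above, not minikube source):

    # List kube-apiserver containers the way logs.go does, then tail each one.
    ids=$(docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}')
    for id in $ids; do
      docker logs --tail 400 "$id"
    done
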
	I0307 14:45:56.292551    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:01.294873    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:01.294956    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:01.306312    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:01.306393    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:01.317047    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:01.317114    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:01.327168    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:01.327242    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:01.338387    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:01.338458    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:01.351786    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:01.351861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:01.362086    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:01.362154    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:01.371964    4258 logs.go:276] 0 containers: []
	W0307 14:46:01.371975    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:01.372034    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:01.382641    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:01.382660    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:01.382666    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:01.386993    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:01.386999    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:01.404739    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:01.404752    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:01.416389    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:01.416400    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:01.427945    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:01.427956    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:01.440629    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:01.440639    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:01.455541    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:01.455552    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:01.493113    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:01.493123    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:01.510020    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:01.510032    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:01.521115    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:01.521126    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:01.540566    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:01.540578    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:01.552603    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:01.552613    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:01.564070    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:01.564081    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:01.601116    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:01.601123    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:01.641528    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:01.641542    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:01.657139    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:01.657151    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:01.676308    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:01.676318    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:04.203078    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:09.205386    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:09.205593    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:09.217570    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:09.217652    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:09.227791    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:09.227862    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:09.238788    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:09.238861    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:09.249412    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:09.249483    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:09.259309    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:09.259379    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:09.274212    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:09.274286    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:09.283759    4258 logs.go:276] 0 containers: []
	W0307 14:46:09.283770    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:09.283827    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:09.294230    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:09.294248    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:09.294254    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:09.305988    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:09.305999    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:09.317570    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:09.317584    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:09.357941    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:09.357952    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:09.375181    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:09.375193    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:09.406149    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:09.406168    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:09.430893    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:09.430908    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:09.467817    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:09.467829    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:09.472008    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:09.472014    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:09.483139    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:09.483150    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:09.495576    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:09.495588    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:09.506777    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:09.506790    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:09.518770    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:09.518784    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:09.532860    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:09.532869    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:09.570732    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:09.570744    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:09.585287    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:09.585299    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:09.600239    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:09.600250    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:12.113111    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:17.115473    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:17.115647    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:17.132568    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:17.132650    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:17.147921    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:17.147996    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:17.158339    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:17.158410    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:17.169509    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:17.169578    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:17.180434    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:17.180504    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:17.191300    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:17.191370    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:17.201805    4258 logs.go:276] 0 containers: []
	W0307 14:46:17.201818    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:17.201881    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:17.212079    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:17.212096    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:17.212101    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:17.234929    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:17.234937    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:17.249275    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:17.249287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:17.263300    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:17.263312    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:17.275527    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:17.275536    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:17.287484    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:17.287494    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:17.298351    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:17.298362    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:17.315141    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:17.315152    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:17.319468    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:17.319478    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:17.353103    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:17.353115    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:17.367756    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:17.367768    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:17.399239    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:17.399249    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:17.412038    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:17.412048    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:17.423550    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:17.423561    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:17.460748    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:17.460761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:17.476551    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:17.476560    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:17.516799    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:17.516810    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:20.033194    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:25.035575    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:25.035811    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:25.056743    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:25.056845    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:25.076053    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:25.076134    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:25.087849    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:25.087926    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:25.098554    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:25.098626    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:25.108895    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:25.108966    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:25.119522    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:25.119588    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:25.130140    4258 logs.go:276] 0 containers: []
	W0307 14:46:25.130151    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:25.130211    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:25.141613    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:25.141632    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:25.141638    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:25.179569    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:25.179579    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:25.224231    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:25.224242    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:25.238809    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:25.238823    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:25.253750    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:25.253760    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:25.267530    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:25.267540    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:25.278748    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:25.278761    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:25.289894    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:25.289906    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:25.301190    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:25.301200    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:25.312463    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:25.312474    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:25.347888    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:25.347899    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:25.361452    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:25.361461    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:25.378932    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:25.378942    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:25.391585    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:25.391596    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:25.403173    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:25.403184    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:25.426365    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:25.426374    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:25.430653    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:25.430658    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:27.944403    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:32.944874    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:32.945186    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:32.982912    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:32.983037    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:33.003008    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:33.003099    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:33.015659    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:33.015732    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:33.027002    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:33.027080    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:33.038107    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:33.038175    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:33.048612    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:33.048680    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:33.059099    4258 logs.go:276] 0 containers: []
	W0307 14:46:33.059112    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:33.059172    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:33.069542    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:33.069559    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:33.069564    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:33.106708    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:33.106716    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:33.121281    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:33.121293    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:33.133585    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:33.133595    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:33.145548    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:33.145559    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:33.157721    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:33.157733    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:33.163607    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:33.163615    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:33.178448    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:33.178459    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:33.196354    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:33.196366    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:33.209132    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:33.209145    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:33.230704    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:33.230712    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:33.272427    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:33.272437    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:33.286379    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:33.286390    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:33.300885    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:33.300896    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:33.315622    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:33.315632    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:33.327713    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:33.327724    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:33.362370    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:33.362383    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:35.886442    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:40.888801    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:40.889120    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:46:40.919735    4258 logs.go:276] 2 containers: [e910452bb885 2654012828a3]
	I0307 14:46:40.919841    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:46:40.938164    4258 logs.go:276] 2 containers: [d05052d3f851 dd38b5bb91df]
	I0307 14:46:40.938256    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:46:40.952563    4258 logs.go:276] 1 containers: [b69a339ebc1d]
	I0307 14:46:40.952647    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:46:40.972368    4258 logs.go:276] 2 containers: [2753acf40986 f3df7e225915]
	I0307 14:46:40.972436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:46:40.984737    4258 logs.go:276] 1 containers: [53b8ead41e19]
	I0307 14:46:40.984808    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:46:40.995497    4258 logs.go:276] 2 containers: [3d044bffa4d5 e4e19e63a4f0]
	I0307 14:46:40.995563    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:46:41.005945    4258 logs.go:276] 0 containers: []
	W0307 14:46:41.005957    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:46:41.006016    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:46:41.016087    4258 logs.go:276] 2 containers: [66575a2be233 133953a51eec]
	I0307 14:46:41.016102    4258 logs.go:123] Gathering logs for kube-apiserver [e910452bb885] ...
	I0307 14:46:41.016107    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e910452bb885"
	I0307 14:46:41.030063    4258 logs.go:123] Gathering logs for etcd [d05052d3f851] ...
	I0307 14:46:41.030074    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d05052d3f851"
	I0307 14:46:41.044273    4258 logs.go:123] Gathering logs for storage-provisioner [133953a51eec] ...
	I0307 14:46:41.044286    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133953a51eec"
	I0307 14:46:41.056010    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:46:41.056022    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:46:41.068215    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:46:41.068225    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:46:41.102415    4258 logs.go:123] Gathering logs for etcd [dd38b5bb91df] ...
	I0307 14:46:41.102426    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd38b5bb91df"
	I0307 14:46:41.116827    4258 logs.go:123] Gathering logs for kube-proxy [53b8ead41e19] ...
	I0307 14:46:41.116839    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b8ead41e19"
	I0307 14:46:41.135276    4258 logs.go:123] Gathering logs for kube-controller-manager [3d044bffa4d5] ...
	I0307 14:46:41.135287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d044bffa4d5"
	I0307 14:46:41.152824    4258 logs.go:123] Gathering logs for kube-apiserver [2654012828a3] ...
	I0307 14:46:41.152835    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2654012828a3"
	I0307 14:46:41.190405    4258 logs.go:123] Gathering logs for coredns [b69a339ebc1d] ...
	I0307 14:46:41.190419    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69a339ebc1d"
	I0307 14:46:41.202260    4258 logs.go:123] Gathering logs for kube-scheduler [2753acf40986] ...
	I0307 14:46:41.202272    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2753acf40986"
	I0307 14:46:41.213692    4258 logs.go:123] Gathering logs for kube-controller-manager [e4e19e63a4f0] ...
	I0307 14:46:41.213701    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e19e63a4f0"
	I0307 14:46:41.230187    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:46:41.230199    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:46:41.252238    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:46:41.252246    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:46:41.256807    4258 logs.go:123] Gathering logs for kube-scheduler [f3df7e225915] ...
	I0307 14:46:41.256820    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3df7e225915"
	I0307 14:46:41.272053    4258 logs.go:123] Gathering logs for storage-provisioner [66575a2be233] ...
	I0307 14:46:41.272065    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66575a2be233"
	I0307 14:46:41.283709    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:46:41.283723    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:46:43.822536    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:48.824893    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:46:48.824940    4258 kubeadm.go:591] duration metric: took 4m3.9415295s to restartPrimaryControlPlane
	W0307 14:46:48.824990    4258 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 14:46:48.825007    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 14:46:49.888830    4258 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063809125s)
	I0307 14:46:49.888915    4258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 14:46:49.893702    4258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 14:46:49.896853    4258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 14:46:49.899747    4258 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 14:46:49.899754    4258 kubeadm.go:156] found existing configuration files:
	
	I0307 14:46:49.899793    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0307 14:46:49.902500    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 14:46:49.902539    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 14:46:49.905311    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0307 14:46:49.907908    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 14:46:49.907935    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 14:46:49.910528    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0307 14:46:49.913198    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 14:46:49.913222    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 14:46:49.915771    4258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0307 14:46:49.918247    4258 kubeadm.go:162] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 14:46:49.918271    4258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
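
The stale-config check and cleanup above reduce to a simple loop (a sketch of the behavior visible in the log, not minikube's actual implementation; the endpoint and file names are taken from the lines above):

    # Drop any kubeconfig that does not reference the expected control-plane endpoint.
    ep="https://control-plane.minikube.internal:50506"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
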
	I0307 14:46:49.921550    4258 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 14:46:49.939067    4258 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 14:46:49.939126    4258 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 14:46:49.994720    4258 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 14:46:49.994903    4258 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 14:46:49.995044    4258 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 14:46:50.050202    4258 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 14:46:50.055370    4258 out.go:204]   - Generating certificates and keys ...
	I0307 14:46:50.055404    4258 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 14:46:50.055434    4258 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 14:46:50.055478    4258 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 14:46:50.055516    4258 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 14:46:50.055557    4258 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 14:46:50.055587    4258 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 14:46:50.055623    4258 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 14:46:50.055657    4258 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 14:46:50.055706    4258 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 14:46:50.055744    4258 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 14:46:50.055766    4258 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 14:46:50.055796    4258 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 14:46:50.286405    4258 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 14:46:50.359465    4258 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 14:46:50.445586    4258 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 14:46:50.615872    4258 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 14:46:50.646038    4258 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 14:46:50.646350    4258 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 14:46:50.646389    4258 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 14:46:50.715742    4258 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 14:46:50.719938    4258 out.go:204]   - Booting up control plane ...
	I0307 14:46:50.720098    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 14:46:50.720174    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 14:46:50.720220    4258 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 14:46:50.720264    4258 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 14:46:50.720346    4258 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 14:46:55.224161    4258 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504878 seconds
	I0307 14:46:55.224224    4258 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 14:46:55.227984    4258 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 14:46:55.749107    4258 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 14:46:55.749433    4258 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-491000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 14:46:56.252933    4258 kubeadm.go:309] [bootstrap-token] Using token: 8ib4o4.pzgh3dxctohekoxl
	I0307 14:46:56.259535    4258 out.go:204]   - Configuring RBAC rules ...
	I0307 14:46:56.259603    4258 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 14:46:56.259662    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 14:46:56.261693    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 14:46:56.266081    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 14:46:56.266904    4258 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 14:46:56.267974    4258 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 14:46:56.271269    4258 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 14:46:56.437841    4258 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 14:46:56.660214    4258 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 14:46:56.660707    4258 kubeadm.go:309] 
	I0307 14:46:56.660738    4258 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 14:46:56.660742    4258 kubeadm.go:309] 
	I0307 14:46:56.660783    4258 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 14:46:56.660786    4258 kubeadm.go:309] 
	I0307 14:46:56.660796    4258 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 14:46:56.660834    4258 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 14:46:56.660863    4258 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 14:46:56.660866    4258 kubeadm.go:309] 
	I0307 14:46:56.660894    4258 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 14:46:56.660898    4258 kubeadm.go:309] 
	I0307 14:46:56.660924    4258 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 14:46:56.660929    4258 kubeadm.go:309] 
	I0307 14:46:56.660953    4258 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 14:46:56.660991    4258 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 14:46:56.661026    4258 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 14:46:56.661029    4258 kubeadm.go:309] 
	I0307 14:46:56.661068    4258 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 14:46:56.661114    4258 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 14:46:56.661120    4258 kubeadm.go:309] 
	I0307 14:46:56.661159    4258 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8ib4o4.pzgh3dxctohekoxl \
	I0307 14:46:56.661232    4258 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b \
	I0307 14:46:56.661246    4258 kubeadm.go:309] 	--control-plane 
	I0307 14:46:56.661249    4258 kubeadm.go:309] 
	I0307 14:46:56.661298    4258 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 14:46:56.661303    4258 kubeadm.go:309] 
	I0307 14:46:56.661359    4258 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8ib4o4.pzgh3dxctohekoxl \
	I0307 14:46:56.661412    4258 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc223c8f61ede9166cd6af749ad18625b70792e87215b933ce92e49a8a2f5e5b 
	I0307 14:46:56.661655    4258 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
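
The [WARNING Service-Kubelet] above can be addressed exactly as kubeadm suggests:

    # Enable the kubelet unit so it starts on boot (command quoted from the warning).
    sudo systemctl enable kubelet.service
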
	I0307 14:46:56.661772    4258 cni.go:84] Creating CNI manager for ""
	I0307 14:46:56.661781    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:46:56.668217    4258 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 14:46:56.676236    4258 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 14:46:56.679220    4258 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
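
The 457-byte conflist itself is not shown in the log; a minimal bridge CNI config of the kind minikube writes looks roughly like this (a sketch: the subnet, plugin list, and field values are assumptions, not the actual file contents):

    # Illustrative bridge CNI config; NOT the exact file copied above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
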
	I0307 14:46:56.684294    4258 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 14:46:56.684349    4258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 14:46:56.684357    4258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-491000 minikube.k8s.io/updated_at=2024_03_07T14_46_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=stopped-upgrade-491000 minikube.k8s.io/primary=true
	I0307 14:46:56.687256    4258 ops.go:34] apiserver oom_adj: -16
	I0307 14:46:56.725516    4258 kubeadm.go:1106] duration metric: took 41.2035ms to wait for elevateKubeSystemPrivileges
	W0307 14:46:56.725554    4258 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 14:46:56.725560    4258 kubeadm.go:393] duration metric: took 4m11.8557655s to StartCluster
	I0307 14:46:56.725570    4258 settings.go:142] acquiring lock: {Name:mke1b21879aa30c53aba22e595b1eacff0880bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:46:56.725678    4258 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:46:56.726107    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/kubeconfig: {Name:mk8faa4add6483df163f8e33c2463984a39a544c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:46:56.726314    4258 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:46:56.729138    4258 out.go:177] * Verifying Kubernetes components...
	I0307 14:46:56.726367    4258 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 14:46:56.726485    4258 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:46:56.737222    4258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 14:46:56.737238    4258 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-491000"
	I0307 14:46:56.737241    4258 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-491000"
	I0307 14:46:56.737252    4258 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-491000"
	W0307 14:46:56.737255    4258 addons.go:243] addon storage-provisioner should already be in state true
	I0307 14:46:56.737256    4258 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-491000"
	I0307 14:46:56.737266    4258 host.go:66] Checking if "stopped-upgrade-491000" exists ...
	I0307 14:46:56.738663    4258 kapi.go:59] client config for stopped-upgrade-491000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/stopped-upgrade-491000/client.key", CAFile:"/Users/jenkins/minikube-integration/18320-1153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c7b6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 14:46:56.738778    4258 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-491000"
	W0307 14:46:56.738783    4258 addons.go:243] addon default-storageclass should already be in state true
	I0307 14:46:56.738791    4258 host.go:66] Checking if "stopped-upgrade-491000" exists ...
	I0307 14:46:56.743164    4258 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 14:46:56.739468    4258 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 14:46:56.747162    4258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 14:46:56.747170    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	I0307 14:46:56.747208    4258 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:46:56.747214    4258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 14:46:56.747218    4258 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/stopped-upgrade-491000/id_rsa Username:docker}
	W0307 14:46:56.747822    4258 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50688->127.0.0.1:50471: read: connection reset by peer
	I0307 14:46:56.747840    4258 retry.go:31] will retry after 270.011748ms: ssh: handshake failed: read tcp 127.0.0.1:50688->127.0.0.1:50471: read: connection reset by peer
	I0307 14:46:56.812726    4258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 14:46:56.817356    4258 api_server.go:52] waiting for apiserver process to appear ...
	I0307 14:46:56.817398    4258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 14:46:56.820948    4258 api_server.go:72] duration metric: took 94.623416ms to wait for apiserver process to appear ...
	I0307 14:46:56.820955    4258 api_server.go:88] waiting for apiserver healthz status ...
	I0307 14:46:56.820961    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:46:56.867347    4258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 14:46:57.055196    4258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 14:47:01.823166    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:01.823231    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:06.823842    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:06.823906    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:11.824444    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:11.824497    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:16.824962    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:16.824999    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:21.825869    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:21.825905    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:26.826859    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:26.826895    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 14:47:27.198770    4258 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 14:47:27.203824    4258 out.go:177] * Enabled addons: storage-provisioner
	I0307 14:47:27.215736    4258 addons.go:505] duration metric: took 30.489340917s for enable addons: enabled=[storage-provisioner]
	I0307 14:47:31.828183    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:31.828285    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:36.830170    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:36.830213    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:41.832403    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:41.832495    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:46.833865    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:46.833887    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:51.836119    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:51.836167    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:47:56.838593    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:47:56.838792    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:47:56.854314    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:47:56.854417    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:47:56.866551    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:47:56.866618    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:47:56.877131    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:47:56.877201    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:47:56.887713    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:47:56.887778    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:47:56.898306    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:47:56.898370    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:47:56.908753    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:47:56.908830    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:47:56.919241    4258 logs.go:276] 0 containers: []
	W0307 14:47:56.919253    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:47:56.919311    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:47:56.929786    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:47:56.929806    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:47:56.929811    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:47:56.946097    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:47:56.946107    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:47:56.957630    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:47:56.957643    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:47:56.994159    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:47:56.994169    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:47:56.998299    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:47:56.998305    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:47:57.033350    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:47:57.033362    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:47:57.048478    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:47:57.048490    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:47:57.063366    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:47:57.063376    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:47:57.075412    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:47:57.075426    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:47:57.087384    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:47:57.087395    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:47:57.112246    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:47:57.112255    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:47:57.125036    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:47:57.125046    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:47:57.137385    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:47:57.137396    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:47:59.657490    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:04.659080    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:04.659208    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:04.672749    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:04.672830    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:04.683596    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:04.683667    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:04.693879    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:04.693945    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:04.704337    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:04.704409    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:04.715601    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:04.715669    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:04.729435    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:04.729509    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:04.739779    4258 logs.go:276] 0 containers: []
	W0307 14:48:04.739791    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:04.739852    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:04.750569    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:04.750583    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:04.750588    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:04.762164    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:04.762175    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:04.779138    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:04.779147    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:04.814694    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:04.814702    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:04.819014    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:04.819023    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:04.854325    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:04.854337    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:04.866334    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:04.866343    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:04.877776    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:04.877788    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:04.889360    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:04.889371    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:04.903328    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:04.903337    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:04.919184    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:04.919195    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:04.934142    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:04.934151    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:04.945365    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:04.945378    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:07.470684    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:12.472906    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:12.473257    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:12.505473    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:12.505603    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:12.524860    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:12.524947    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:12.539048    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:12.539121    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:12.550144    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:12.550216    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:12.561265    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:12.561336    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:12.571605    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:12.571676    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:12.582246    4258 logs.go:276] 0 containers: []
	W0307 14:48:12.582258    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:12.582316    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:12.592817    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:12.592831    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:12.592836    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:12.604598    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:12.604610    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:12.628511    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:12.628521    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:12.640271    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:12.640285    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:12.645199    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:12.645207    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:12.679071    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:12.679082    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:12.694173    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:12.694183    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:12.709600    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:12.709610    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:12.727391    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:12.727406    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:12.739777    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:12.739788    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:12.774762    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:12.774770    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:12.790777    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:12.790788    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:12.802889    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:12.802899    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:15.320100    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:20.322079    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:20.322306    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:20.345191    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:20.345311    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:20.366034    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:20.366127    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:20.382450    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:20.382524    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:20.397262    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:20.397329    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:20.407655    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:20.407728    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:20.417910    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:20.417984    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:20.428252    4258 logs.go:276] 0 containers: []
	W0307 14:48:20.428268    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:20.428322    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:20.438627    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:20.438640    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:20.438645    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:20.474492    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:20.474503    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:20.488745    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:20.488757    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:20.502277    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:20.502290    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:20.513957    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:20.513971    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:20.540878    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:20.540892    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:20.552593    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:20.552604    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:20.588021    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:20.588030    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:20.592454    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:20.592466    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:20.610963    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:20.610973    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:20.625014    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:20.625024    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:20.648986    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:20.648996    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:20.661073    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:20.661086    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:23.175519    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:28.178155    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:28.178626    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:28.224258    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:28.224397    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:28.244562    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:28.244674    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:28.258565    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:28.258639    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:28.270375    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:28.270448    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:28.280478    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:28.280543    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:28.291708    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:28.291778    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:28.301815    4258 logs.go:276] 0 containers: []
	W0307 14:48:28.301829    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:28.301884    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:28.320271    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:28.320288    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:28.320296    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:28.339014    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:28.339025    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:28.364431    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:28.364441    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:28.404554    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:28.404566    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:28.417810    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:28.417818    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:28.429630    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:28.429643    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:28.446600    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:28.446611    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:28.461800    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:28.461813    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:28.475314    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:28.475327    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:28.492448    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:28.492465    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:28.529239    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:28.529251    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:28.533335    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:28.533342    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:28.547313    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:28.547323    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:31.060691    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:36.063001    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:36.063094    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:36.079178    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:36.079255    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:36.090864    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:36.090931    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:36.103294    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:36.103394    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:36.120627    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:36.120703    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:36.132936    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:36.133019    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:36.144492    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:36.144564    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:36.156058    4258 logs.go:276] 0 containers: []
	W0307 14:48:36.156069    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:36.156172    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:36.173206    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:36.173221    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:36.173226    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:36.211635    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:36.211643    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:36.248798    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:36.248809    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:36.263242    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:36.263253    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:36.274932    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:36.274945    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:36.290572    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:36.290585    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:36.301827    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:36.301839    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:36.313426    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:36.317297    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:36.340426    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:36.340435    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:36.351401    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:36.351413    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:36.356140    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:36.356148    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:36.370290    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:36.370300    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:36.381666    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:36.381677    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:38.901043    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:43.903428    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:43.903617    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:43.924037    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:43.924144    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:43.938072    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:43.938154    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:43.950018    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:43.950086    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:43.960787    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:43.960852    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:43.970910    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:43.970975    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:43.981537    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:43.981609    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:43.991791    4258 logs.go:276] 0 containers: []
	W0307 14:48:43.991803    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:43.991856    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:44.002568    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:44.002583    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:44.002588    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:44.015433    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:44.015442    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:44.051620    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:44.051628    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:44.055932    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:44.055939    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:44.069484    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:44.069496    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:44.080771    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:44.080783    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:44.098550    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:44.098561    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:44.113799    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:44.113812    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:44.130531    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:44.130541    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:44.153901    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:44.153913    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:44.165484    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:44.165494    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:44.198971    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:44.198983    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:44.220868    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:44.220880    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:46.733856    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:51.736229    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:51.736296    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:51.746887    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:51.746954    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:51.756997    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:51.757066    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:51.767148    4258 logs.go:276] 2 containers: [dc33544dead5 cb97979a251a]
	I0307 14:48:51.767212    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:51.777773    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:51.777835    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:51.787909    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:51.787965    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:51.798112    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:51.798175    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:51.812367    4258 logs.go:276] 0 containers: []
	W0307 14:48:51.812378    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:51.812424    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:51.822553    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:51.822568    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:51.822575    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:51.835095    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:51.835108    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:51.847103    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:51.847113    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:51.863896    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:51.863905    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:51.876024    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:51.876035    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:51.880660    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:51.880671    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:51.914470    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:51.914482    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:51.928795    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:51.928804    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:51.940803    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:51.940814    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:48:51.965062    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:51.965070    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:52.000904    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:52.000912    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:52.014728    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:52.014738    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:52.029759    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:52.029770    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:54.543337    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:48:59.545687    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:48:59.545869    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:48:59.557699    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:48:59.557774    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:48:59.568595    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:48:59.568666    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:48:59.579061    4258 logs.go:276] 3 containers: [f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:48:59.579133    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:48:59.589204    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:48:59.589268    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:48:59.605712    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:48:59.605774    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:48:59.616222    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:48:59.616279    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:48:59.629485    4258 logs.go:276] 0 containers: []
	W0307 14:48:59.629496    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:48:59.629560    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:48:59.640031    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:48:59.640055    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:48:59.640060    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:48:59.651359    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:48:59.651370    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:48:59.663273    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:48:59.663284    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:48:59.675037    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:48:59.675048    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:48:59.711239    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:48:59.711247    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:48:59.746627    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:48:59.746639    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:48:59.757789    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:48:59.757802    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:48:59.761813    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:48:59.761822    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:48:59.778572    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:48:59.778583    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:48:59.790484    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:48:59.790497    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:48:59.804774    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:48:59.804786    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:48:59.823658    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:48:59.823670    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:48:59.835640    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:48:59.835653    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:48:59.858930    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:48:59.858942    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:02.386959    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:07.389232    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:07.389645    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:07.429748    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:07.429881    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:07.451528    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:07.451639    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:07.467872    4258 logs.go:276] 3 containers: [f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:07.467948    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:07.479753    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:07.479831    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:07.489800    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:07.489869    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:07.500078    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:07.500159    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:07.510502    4258 logs.go:276] 0 containers: []
	W0307 14:49:07.510514    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:07.510576    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:07.520509    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:07.520524    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:07.520531    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:07.555304    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:07.555316    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:07.566673    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:07.566686    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:07.582132    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:07.582142    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:07.595792    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:07.595803    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:07.609742    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:07.609752    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:07.633379    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:07.633390    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:07.637582    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:07.637589    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:07.654278    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:07.654288    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:07.665396    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:07.665406    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:07.676346    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:07.676360    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:07.711312    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:07.711321    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:07.723273    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:07.723287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:07.738356    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:07.738369    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:10.252159    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:15.254886    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:15.255244    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:15.295295    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:15.295422    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:15.317353    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:15.317464    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:15.333055    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:15.333136    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:15.345114    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:15.345184    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:15.355207    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:15.355276    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:15.365836    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:15.365905    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:15.376285    4258 logs.go:276] 0 containers: []
	W0307 14:49:15.376297    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:15.376357    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:15.386545    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:15.386562    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:15.386567    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:15.390896    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:15.390904    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:15.402215    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:15.402225    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:15.413467    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:15.413478    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:15.425543    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:15.425556    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:15.442688    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:15.442698    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:15.454139    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:15.454149    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:15.477971    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:15.477982    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:15.512318    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:15.512325    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:15.523443    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:15.523452    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:15.537651    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:15.537660    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:15.554801    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:15.554811    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:15.571503    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:15.571512    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:15.585834    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:15.585847    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:15.597441    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:15.597452    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:18.134252    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:23.136825    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:23.137076    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:23.169820    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:23.169937    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:23.186664    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:23.186739    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:23.199789    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:23.199868    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:23.211167    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:23.211236    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:23.230583    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:23.230650    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:23.241307    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:23.241376    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:23.253476    4258 logs.go:276] 0 containers: []
	W0307 14:49:23.253486    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:23.253537    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:23.263837    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:23.263857    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:23.263861    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:23.278806    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:23.278815    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:23.290644    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:23.290659    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:23.331596    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:23.331610    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:23.346208    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:23.346218    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:23.360999    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:23.361010    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:23.372804    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:23.372815    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:23.396300    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:23.396312    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:23.430801    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:23.430810    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:23.442410    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:23.442418    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:23.466967    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:23.466984    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:23.478530    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:23.478541    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:23.489722    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:23.489730    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:23.501014    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:23.501025    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:23.512745    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:23.512754    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:26.019019    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:31.021732    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:31.022102    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:31.060095    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:31.060215    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:31.081979    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:31.082072    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:31.096887    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:31.096951    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:31.110070    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:31.110142    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:31.120404    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:31.120472    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:31.131360    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:31.131423    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:31.145844    4258 logs.go:276] 0 containers: []
	W0307 14:49:31.145855    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:31.145914    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:31.156570    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:31.156591    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:31.156595    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:31.172589    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:31.172599    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:31.208491    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:31.208507    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:31.236041    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:31.236053    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:31.248104    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:31.248114    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:31.262591    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:31.262604    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:31.286738    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:31.286746    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:31.300297    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:31.300309    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:31.312404    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:31.314099    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:31.327171    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:31.327184    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:31.344474    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:31.344484    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:31.356526    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:31.356538    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:31.391837    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:31.391848    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:31.405784    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:31.405793    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:31.421176    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:31.421186    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:33.927447    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:38.929744    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:38.929846    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:38.944974    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:38.945040    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:38.957075    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:38.957127    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:38.969626    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:38.969685    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:38.980327    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:38.980382    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:38.991453    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:38.991512    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:39.008674    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:39.008760    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:39.020819    4258 logs.go:276] 0 containers: []
	W0307 14:49:39.020830    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:39.020860    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:39.032386    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:39.032405    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:39.032411    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:39.037048    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:39.037058    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:39.074517    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:39.074528    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:39.088917    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:39.088927    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:39.100822    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:39.100834    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:39.126692    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:39.126707    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:39.164393    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:39.164402    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:39.182258    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:39.182266    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:39.193529    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:39.193539    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:39.205769    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:39.205779    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:39.225156    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:39.225164    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:39.237288    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:39.237300    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:39.250076    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:39.250090    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:39.263127    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:39.263138    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:39.290524    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:39.290539    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:41.815830    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:46.818497    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:46.818657    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:46.830194    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:46.830260    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:46.840760    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:46.840826    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:46.851587    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:46.851656    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:46.863741    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:46.863813    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:46.874958    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:46.875026    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:46.890416    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:46.890487    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:46.909519    4258 logs.go:276] 0 containers: []
	W0307 14:49:46.909530    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:46.909589    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:46.920274    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:46.920293    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:46.920299    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:46.935018    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:46.935030    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:46.961155    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:46.961164    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:46.965366    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:46.965371    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:46.976731    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:46.976742    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:46.992399    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:46.992409    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:47.004370    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:47.004382    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:47.018646    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:47.018657    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:47.030525    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:47.030536    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:47.042156    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:47.042166    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:47.059547    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:47.059560    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:47.071265    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:47.071274    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:47.107538    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:47.107547    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:47.140996    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:47.141004    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:47.153504    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:47.153513    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:49.667425    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:49:54.669123    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:49:54.669594    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:49:54.705979    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:49:54.706111    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:49:54.727937    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:49:54.728041    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:49:54.743641    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:49:54.743720    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:49:54.755768    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:49:54.755849    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:49:54.766703    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:49:54.766763    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:49:54.778506    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:49:54.778567    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:49:54.788636    4258 logs.go:276] 0 containers: []
	W0307 14:49:54.788648    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:49:54.788704    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:49:54.803162    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:49:54.803181    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:49:54.803186    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:49:54.839696    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:49:54.839704    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:49:54.844287    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:49:54.844294    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:49:54.883064    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:49:54.883077    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:49:54.907511    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:49:54.907525    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:49:54.921889    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:49:54.921903    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:49:54.934938    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:49:54.934947    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:49:54.949300    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:49:54.949313    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:49:54.960686    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:49:54.960696    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:49:54.978587    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:49:54.978600    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:49:54.997055    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:49:54.997067    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:49:55.021364    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:49:55.021370    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:49:55.035363    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:49:55.035374    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:49:55.047546    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:49:55.047559    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:49:55.058787    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:49:55.058799    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:49:57.571528    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:02.574224    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:02.574322    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:02.585865    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:02.585936    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:02.598536    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:02.598614    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:02.611809    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:02.611870    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:02.625206    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:02.625272    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:02.638583    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:02.638726    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:02.652036    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:02.652083    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:02.662828    4258 logs.go:276] 0 containers: []
	W0307 14:50:02.662840    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:02.662891    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:02.674133    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:02.674146    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:02.674153    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:02.690726    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:02.690738    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:02.703094    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:02.703105    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:02.715469    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:02.715481    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:02.729451    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:02.729465    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:02.752564    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:02.752579    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:02.768243    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:02.768254    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:02.781711    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:02.781721    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:02.801081    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:02.801095    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:02.814547    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:02.814562    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:02.828536    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:02.828545    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:02.840062    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:02.840073    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:02.865519    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:02.865531    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:02.901880    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:02.901901    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:02.906875    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:02.906889    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:05.446647    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:10.449379    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:10.449790    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:10.485682    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:10.485811    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:10.506033    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:10.506132    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:10.522007    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:10.522076    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:10.535580    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:10.535661    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:10.546273    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:10.546347    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:10.561278    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:10.561346    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:10.578585    4258 logs.go:276] 0 containers: []
	W0307 14:50:10.578597    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:10.578654    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:10.589022    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:10.589040    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:10.589045    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:10.601276    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:10.601287    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:10.619134    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:10.619147    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:10.631076    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:10.631086    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:10.649767    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:10.649778    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:10.661487    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:10.661498    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:10.682623    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:10.682634    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:10.720525    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:10.720538    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:10.734642    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:10.734654    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:10.746369    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:10.746380    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:10.758134    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:10.758146    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:10.782664    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:10.782671    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:10.820310    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:10.820320    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:10.824420    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:10.824426    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:10.841602    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:10.841616    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:13.354245    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:18.355122    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:18.355482    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:18.396568    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:18.396690    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:18.418010    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:18.418120    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:18.433616    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:18.433690    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:18.446546    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:18.446621    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:18.465422    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:18.465488    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:18.475677    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:18.475734    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:18.486107    4258 logs.go:276] 0 containers: []
	W0307 14:50:18.486116    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:18.486165    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:18.496362    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:18.496378    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:18.496383    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:18.509961    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:18.509974    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:18.524052    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:18.524063    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:18.559988    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:18.560000    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:18.571717    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:18.571728    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:18.583534    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:18.583548    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:18.598651    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:18.598661    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:18.611108    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:18.611117    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:18.629961    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:18.629972    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:18.665923    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:18.665930    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:18.670122    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:18.670131    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:18.684235    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:18.684246    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:18.695758    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:18.695772    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:18.707235    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:18.707243    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:18.718653    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:18.718663    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:21.245401    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:26.246266    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:26.246372    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:26.258309    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:26.258398    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:26.270170    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:26.270230    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:26.282509    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:26.282595    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:26.295240    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:26.295445    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:26.307003    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:26.307061    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:26.318384    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:26.318444    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:26.329763    4258 logs.go:276] 0 containers: []
	W0307 14:50:26.329775    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:26.329823    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:26.341223    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:26.341243    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:26.341249    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:26.360073    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:26.360085    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:26.372477    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:26.372488    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:26.409695    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:26.409710    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:26.430397    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:26.430408    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:26.444200    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:26.444212    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:26.460051    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:26.460059    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:26.472517    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:26.472529    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:26.497444    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:26.497457    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:26.513656    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:26.513667    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:26.531214    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:26.531230    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:26.536632    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:26.536642    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:26.548840    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:26.548849    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:26.560821    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:26.560834    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:26.596558    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:26.596568    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:29.113851    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:34.123219    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:34.123452    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:34.162290    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:34.162424    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:34.186407    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:34.186520    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:34.202912    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:34.202987    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:34.215928    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:34.215990    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:34.226679    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:34.226750    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:34.237432    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:34.237492    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:34.247749    4258 logs.go:276] 0 containers: []
	W0307 14:50:34.247762    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:34.247826    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:34.258198    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:34.258212    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:34.258217    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:34.271025    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:34.271036    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:34.286845    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:34.286854    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:34.298401    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:34.298412    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:34.312995    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:34.313004    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:34.324238    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:34.324264    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:34.369328    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:34.369341    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:34.393000    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:34.393014    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:34.407832    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:34.407842    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:34.419275    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:34.419287    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:34.442228    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:34.442236    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:34.476810    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:34.476818    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:34.497496    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:34.497507    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:34.509591    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:34.509605    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:34.527386    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:34.527400    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:37.037445    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:42.044336    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:42.045326    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:42.081307    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:42.081436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:42.102311    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:42.102426    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:42.118008    4258 logs.go:276] 4 containers: [57160aac0930 f70b6cc04e8a dc33544dead5 cb97979a251a]
	I0307 14:50:42.118085    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:42.130014    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:42.130082    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:42.140735    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:42.140800    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:42.151128    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:42.151196    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:42.161281    4258 logs.go:276] 0 containers: []
	W0307 14:50:42.161294    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:42.161354    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:42.172486    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:42.172506    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:42.172512    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:42.187422    4258 logs.go:123] Gathering logs for coredns [cb97979a251a] ...
	I0307 14:50:42.187432    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb97979a251a"
	I0307 14:50:42.199712    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:42.199722    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:42.217591    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:42.217601    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:42.229895    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:42.229908    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:42.241366    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:42.241376    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:42.278040    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:42.278047    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:42.292977    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:42.292988    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:42.308271    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:42.308282    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:42.320012    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:42.320020    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:42.336154    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:42.336167    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:42.359999    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:42.360008    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:42.364109    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:42.364116    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:42.398849    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:42.398858    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:42.412375    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:42.412387    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:44.928496    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:49.933917    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:49.934025    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 14:50:49.945661    4258 logs.go:276] 1 containers: [643ebcb8d1b5]
	I0307 14:50:49.945723    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 14:50:49.957491    4258 logs.go:276] 1 containers: [1d4ffc33c618]
	I0307 14:50:49.957577    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 14:50:49.969361    4258 logs.go:276] 4 containers: [f8461628049b 57160aac0930 f70b6cc04e8a dc33544dead5]
	I0307 14:50:49.969436    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 14:50:49.981921    4258 logs.go:276] 1 containers: [3e6a82d1b0ae]
	I0307 14:50:49.982004    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 14:50:49.993573    4258 logs.go:276] 1 containers: [55eb07f7730f]
	I0307 14:50:49.993653    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 14:50:50.004458    4258 logs.go:276] 1 containers: [b976ce3425c3]
	I0307 14:50:50.004522    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 14:50:50.019843    4258 logs.go:276] 0 containers: []
	W0307 14:50:50.019855    4258 logs.go:278] No container was found matching "kindnet"
	I0307 14:50:50.019929    4258 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 14:50:50.031057    4258 logs.go:276] 1 containers: [9be093d0f8c0]
	I0307 14:50:50.031077    4258 logs.go:123] Gathering logs for kube-controller-manager [b976ce3425c3] ...
	I0307 14:50:50.031082    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b976ce3425c3"
	I0307 14:50:50.051284    4258 logs.go:123] Gathering logs for kube-scheduler [3e6a82d1b0ae] ...
	I0307 14:50:50.051297    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6a82d1b0ae"
	I0307 14:50:50.069747    4258 logs.go:123] Gathering logs for storage-provisioner [9be093d0f8c0] ...
	I0307 14:50:50.069760    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be093d0f8c0"
	I0307 14:50:50.082384    4258 logs.go:123] Gathering logs for container status ...
	I0307 14:50:50.082396    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 14:50:50.094690    4258 logs.go:123] Gathering logs for kubelet ...
	I0307 14:50:50.094703    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 14:50:50.131042    4258 logs.go:123] Gathering logs for dmesg ...
	I0307 14:50:50.131062    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 14:50:50.135821    4258 logs.go:123] Gathering logs for kube-apiserver [643ebcb8d1b5] ...
	I0307 14:50:50.135832    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 643ebcb8d1b5"
	I0307 14:50:50.151189    4258 logs.go:123] Gathering logs for coredns [f8461628049b] ...
	I0307 14:50:50.151204    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8461628049b"
	I0307 14:50:50.163877    4258 logs.go:123] Gathering logs for coredns [dc33544dead5] ...
	I0307 14:50:50.163889    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc33544dead5"
	I0307 14:50:50.176779    4258 logs.go:123] Gathering logs for etcd [1d4ffc33c618] ...
	I0307 14:50:50.176790    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d4ffc33c618"
	I0307 14:50:50.192247    4258 logs.go:123] Gathering logs for coredns [57160aac0930] ...
	I0307 14:50:50.192261    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57160aac0930"
	I0307 14:50:50.205361    4258 logs.go:123] Gathering logs for coredns [f70b6cc04e8a] ...
	I0307 14:50:50.205372    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70b6cc04e8a"
	I0307 14:50:50.218827    4258 logs.go:123] Gathering logs for Docker ...
	I0307 14:50:50.218835    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 14:50:50.242172    4258 logs.go:123] Gathering logs for describe nodes ...
	I0307 14:50:50.242189    4258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 14:50:50.280498    4258 logs.go:123] Gathering logs for kube-proxy [55eb07f7730f] ...
	I0307 14:50:50.280509    4258 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55eb07f7730f"
	I0307 14:50:52.797146    4258 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 14:50:57.801180    4258 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 14:50:57.809266    4258 out.go:177] 
	W0307 14:50:57.814329    4258 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 14:50:57.814361    4258 out.go:239] * 
	* 
	W0307 14:50:57.816773    4258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:57.832155    4258 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-491000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.24s)
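
The failure above is a client-side timeout: each `Checking apiserver healthz at https://10.0.2.15:8443/healthz ...` line is followed roughly five seconds later by `context deadline exceeded`, until the overall 6m0s node wait expires. As a rough illustration (a minimal sketch, not minikube's actual `api_server.go` code), a probe loop of that shape looks like the Go program below; the 5s request timeout, the skipped TLS verification, and the retry count are assumptions chosen to match the timestamps in the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz issues one GET against the apiserver healthz endpoint.
	// A request that never returns headers fails with the same
	// "context deadline exceeded (Client.Timeout exceeded while awaiting
	// headers)" error text seen in the log above.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; matches the ~5s probe spacing in the log
			Transport: &http.Transport{
				// The guest apiserver's cert is not in the host trust store,
				// so a diagnostic probe skips verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		for attempt := 0; attempt < 3; attempt++ { // minikube keeps retrying until its 6m wait expires
			if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
				fmt.Println("stopped:", err)
				continue
			}
			fmt.Println("apiserver healthy")
			return
		}
	}

Run against an unreachable guest IP, this prints the same wrapped timeout error after each ~5s probe, which is exactly the rhythm visible in the timestamps above.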

TestPause/serial/Start (9.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-839000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-839000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.855045375s)

-- stdout --
	* [pause-839000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-839000" primary control-plane node in "pause-839000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-839000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-839000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-839000 -n pause-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-839000 -n pause-839000: exit status 7 (68.974292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
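
Every qemu2 start in this failure (and in the rest of the report below) dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing on the CI host is listening on socket_vmnet's UNIX socket. A hypothetical one-file Go check, separate from the test suite, that distinguishes "daemon not running" from "socket file missing":

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the UNIX socket that socket_vmnet_client (and therefore QEMU's
		// guest networking) depends on. "connection refused" means the socket
		// file exists but no daemon has it open; "no such file or directory"
		// means socket_vmnet was never started at all.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}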

TestNoKubernetes/serial/StartWithK8s (9.82s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 : exit status 80 (9.754374833s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-902000" primary control-plane node in "NoKubernetes-902000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-902000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (61.302958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.82s)

TestNoKubernetes/serial/StartWithStopK8s (5.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 : exit status 80 (5.844871417s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (69.428833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.92s)

TestNoKubernetes/serial/Start (5.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 : exit status 80 (5.834874541s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (33.715875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.87s)

TestNoKubernetes/serial/StartNoArgs (6.4s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 : exit status 80 (6.338446041s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (62.669167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.40s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.902064375s)

-- stdout --
	* [auto-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-581000" primary control-plane node in "auto-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:49:39.130408    4625 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:49:39.130557    4625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:49:39.130560    4625 out.go:304] Setting ErrFile to fd 2...
	I0307 14:49:39.130563    4625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:49:39.130706    4625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:49:39.132027    4625 out.go:298] Setting JSON to false
	I0307 14:49:39.149819    4625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4736,"bootTime":1709847043,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:49:39.149885    4625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:49:39.155398    4625 out.go:177] * [auto-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:49:39.162258    4625 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:49:39.162300    4625 notify.go:220] Checking for updates...
	I0307 14:49:39.170406    4625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:49:39.173367    4625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:49:39.176374    4625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:49:39.179412    4625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:49:39.180615    4625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:49:39.183761    4625 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:49:39.183825    4625 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:49:39.183870    4625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:49:39.188406    4625 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:49:39.194347    4625 start.go:297] selected driver: qemu2
	I0307 14:49:39.194352    4625 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:49:39.194358    4625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:49:39.196845    4625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:49:39.199357    4625 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:49:39.202552    4625 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:49:39.202591    4625 cni.go:84] Creating CNI manager for ""
	I0307 14:49:39.202598    4625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:49:39.202602    4625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:49:39.202635    4625 start.go:340] cluster config:
	{Name:auto-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:49:39.207629    4625 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:49:39.215367    4625 out.go:177] * Starting "auto-581000" primary control-plane node in "auto-581000" cluster
	I0307 14:49:39.219324    4625 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:49:39.219356    4625 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:49:39.219369    4625 cache.go:56] Caching tarball of preloaded images
	I0307 14:49:39.219450    4625 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:49:39.219458    4625 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:49:39.219525    4625 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/auto-581000/config.json ...
	I0307 14:49:39.219538    4625 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/auto-581000/config.json: {Name:mkb578fee4ac73f87285473157fec5176f79a627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:49:39.219920    4625 start.go:360] acquireMachinesLock for auto-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:49:39.219951    4625 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "auto-581000"
	I0307 14:49:39.219961    4625 start.go:93] Provisioning new machine with config: &{Name:auto-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:49:39.219993    4625 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:49:39.224400    4625 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:49:39.240006    4625 start.go:159] libmachine.API.Create for "auto-581000" (driver="qemu2")
	I0307 14:49:39.240035    4625 client.go:168] LocalClient.Create starting
	I0307 14:49:39.240113    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:49:39.240142    4625 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:39.240151    4625 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:39.240200    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:49:39.240222    4625 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:39.240227    4625 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:39.240667    4625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:49:39.386002    4625 main.go:141] libmachine: Creating SSH key...
	I0307 14:49:39.535006    4625 main.go:141] libmachine: Creating Disk image...
	I0307 14:49:39.535016    4625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:49:39.535202    4625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:39.548289    4625 main.go:141] libmachine: STDOUT: 
	I0307 14:49:39.548314    4625 main.go:141] libmachine: STDERR: 
	I0307 14:49:39.548364    4625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2 +20000M
	I0307 14:49:39.559839    4625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:49:39.559856    4625 main.go:141] libmachine: STDERR: 
	I0307 14:49:39.559869    4625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:39.559873    4625 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:49:39.559898    4625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:15:b3:0e:82:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:39.561667    4625 main.go:141] libmachine: STDOUT: 
	I0307 14:49:39.561683    4625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:49:39.561708    4625 client.go:171] duration metric: took 321.664625ms to LocalClient.Create
	I0307 14:49:41.563998    4625 start.go:128] duration metric: took 2.343970417s to createHost
	I0307 14:49:41.564099    4625 start.go:83] releasing machines lock for "auto-581000", held for 2.344133417s
	W0307 14:49:41.564199    4625 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:49:41.575237    4625 out.go:177] * Deleting "auto-581000" in qemu2 ...
	W0307 14:49:41.608623    4625 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:49:41.608696    4625 start.go:728] Will try again in 5 seconds ...
	I0307 14:49:46.610972    4625 start.go:360] acquireMachinesLock for auto-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:49:46.611458    4625 start.go:364] duration metric: took 374.709µs to acquireMachinesLock for "auto-581000"
	I0307 14:49:46.611600    4625 start.go:93] Provisioning new machine with config: &{Name:auto-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:49:46.611901    4625 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:49:46.621379    4625 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:49:46.669419    4625 start.go:159] libmachine.API.Create for "auto-581000" (driver="qemu2")
	I0307 14:49:46.669467    4625 client.go:168] LocalClient.Create starting
	I0307 14:49:46.669610    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:49:46.669670    4625 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:46.669689    4625 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:46.669759    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:49:46.669801    4625 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:46.669822    4625 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:46.670594    4625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:49:46.823565    4625 main.go:141] libmachine: Creating SSH key...
	I0307 14:49:46.925255    4625 main.go:141] libmachine: Creating Disk image...
	I0307 14:49:46.925264    4625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:49:46.925481    4625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:46.938972    4625 main.go:141] libmachine: STDOUT: 
	I0307 14:49:46.938997    4625 main.go:141] libmachine: STDERR: 
	I0307 14:49:46.939066    4625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2 +20000M
	I0307 14:49:46.951271    4625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:49:46.951297    4625 main.go:141] libmachine: STDERR: 
	I0307 14:49:46.951310    4625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:46.951314    4625 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:49:46.951352    4625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:58:1b:1e:6b:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/auto-581000/disk.qcow2
	I0307 14:49:46.953540    4625 main.go:141] libmachine: STDOUT: 
	I0307 14:49:46.953561    4625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:49:46.953578    4625 client.go:171] duration metric: took 284.1045ms to LocalClient.Create
	I0307 14:49:48.955767    4625 start.go:128] duration metric: took 2.343828875s to createHost
	I0307 14:49:48.955852    4625 start.go:83] releasing machines lock for "auto-581000", held for 2.344364666s
	W0307 14:49:48.956259    4625 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:49:48.965016    4625 out.go:177] 
	W0307 14:49:48.973065    4625 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:49:48.973112    4625 out.go:239] * 
	* 
	W0307 14:49:48.976133    4625 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:49:48.984941    4625 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
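
The `executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...` lines above show the launch pattern behind this error: the client connects to the socket_vmnet daemon's UNIX socket and hands the connected socket down to QEMU as file descriptor 3, which is why the command ends with `-netdev socket,id=net0,fd=3`. Below is a trimmed, hypothetical Go sketch of that invocation (flags reduced to the networking-relevant subset, paths and MAC taken from the report and not guaranteed on other hosts), not minikube's libmachine code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client dials /var/run/socket_vmnet, then execs QEMU
		// with the connected socket inherited as fd 3.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-display", "none",
			"-device", "virtio-net-pci,netdev=net0,mac=ce:15:b3:0e:82:23",
			"-netdev", "socket,id=net0,fd=3", // fd 3 is the socket the client passes down
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			// With no daemon listening, stderr carries the same line the
			// report shows: Failed to connect to "/var/run/socket_vmnet".
			fmt.Printf("%s(%v)\n", out, err)
		}
	}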

TestNetworkPlugins/group/kindnet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.923634875s)

-- stdout --
	* [kindnet-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-581000" primary control-plane node in "kindnet-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:49:51.257043    4748 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:49:51.257162    4748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:49:51.257166    4748 out.go:304] Setting ErrFile to fd 2...
	I0307 14:49:51.257168    4748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:49:51.257297    4748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:49:51.258409    4748 out.go:298] Setting JSON to false
	I0307 14:49:51.275247    4748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4748,"bootTime":1709847043,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:49:51.275324    4748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:49:51.282590    4748 out.go:177] * [kindnet-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:49:51.290499    4748 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:49:51.294527    4748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:49:51.290528    4748 notify.go:220] Checking for updates...
	I0307 14:49:51.300492    4748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:49:51.303518    4748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:49:51.306495    4748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:49:51.313514    4748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:49:51.316837    4748 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:49:51.316901    4748 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:49:51.316943    4748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:49:51.320512    4748 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:49:51.327416    4748 start.go:297] selected driver: qemu2
	I0307 14:49:51.327421    4748 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:49:51.327426    4748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:49:51.329538    4748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:49:51.333572    4748 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:49:51.337595    4748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:49:51.337657    4748 cni.go:84] Creating CNI manager for "kindnet"
	I0307 14:49:51.337661    4748 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 14:49:51.337691    4748 start.go:340] cluster config:
	{Name:kindnet-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:49:51.341784    4748 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:49:51.350549    4748 out.go:177] * Starting "kindnet-581000" primary control-plane node in "kindnet-581000" cluster
	I0307 14:49:51.354545    4748 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:49:51.354559    4748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:49:51.354572    4748 cache.go:56] Caching tarball of preloaded images
	I0307 14:49:51.354638    4748 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:49:51.354643    4748 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:49:51.354729    4748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kindnet-581000/config.json ...
	I0307 14:49:51.354741    4748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kindnet-581000/config.json: {Name:mked13149c0b66e1ded981890a0e3f0cba9c2674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:49:51.354959    4748 start.go:360] acquireMachinesLock for kindnet-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:49:51.354988    4748 start.go:364] duration metric: took 24.334µs to acquireMachinesLock for "kindnet-581000"
	I0307 14:49:51.355002    4748 start.go:93] Provisioning new machine with config: &{Name:kindnet-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:49:51.355026    4748 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:49:51.363430    4748 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:49:51.378798    4748 start.go:159] libmachine.API.Create for "kindnet-581000" (driver="qemu2")
	I0307 14:49:51.378825    4748 client.go:168] LocalClient.Create starting
	I0307 14:49:51.378889    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:49:51.378916    4748 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:51.378927    4748 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:51.378972    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:49:51.378993    4748 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:51.378999    4748 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:51.379366    4748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:49:51.522011    4748 main.go:141] libmachine: Creating SSH key...
	I0307 14:49:51.678858    4748 main.go:141] libmachine: Creating Disk image...
	I0307 14:49:51.678870    4748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:49:51.679074    4748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:51.691983    4748 main.go:141] libmachine: STDOUT: 
	I0307 14:49:51.692002    4748 main.go:141] libmachine: STDERR: 
	I0307 14:49:51.692062    4748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2 +20000M
	I0307 14:49:51.702929    4748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:49:51.702954    4748 main.go:141] libmachine: STDERR: 
	I0307 14:49:51.702975    4748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:51.702980    4748 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:49:51.703015    4748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ea:d2:9a:8c:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:51.704821    4748 main.go:141] libmachine: STDOUT: 
	I0307 14:49:51.704838    4748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:49:51.704858    4748 client.go:171] duration metric: took 326.025292ms to LocalClient.Create
	I0307 14:49:53.705672    4748 start.go:128] duration metric: took 2.350587542s to createHost
	I0307 14:49:53.705787    4748 start.go:83] releasing machines lock for "kindnet-581000", held for 2.35078425s
	W0307 14:49:53.705859    4748 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:49:53.719849    4748 out.go:177] * Deleting "kindnet-581000" in qemu2 ...
	W0307 14:49:53.756833    4748 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:49:53.756877    4748 start.go:728] Will try again in 5 seconds ...
	I0307 14:49:58.758334    4748 start.go:360] acquireMachinesLock for kindnet-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:49:58.758901    4748 start.go:364] duration metric: took 420.333µs to acquireMachinesLock for "kindnet-581000"
	I0307 14:49:58.759101    4748 start.go:93] Provisioning new machine with config: &{Name:kindnet-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:49:58.759396    4748 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:49:58.769068    4748 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:49:58.818986    4748 start.go:159] libmachine.API.Create for "kindnet-581000" (driver="qemu2")
	I0307 14:49:58.819037    4748 client.go:168] LocalClient.Create starting
	I0307 14:49:58.819152    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:49:58.819221    4748 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:58.819235    4748 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:58.819302    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:49:58.819347    4748 main.go:141] libmachine: Decoding PEM data...
	I0307 14:49:58.819358    4748 main.go:141] libmachine: Parsing certificate...
	I0307 14:49:58.819929    4748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:49:58.976106    4748 main.go:141] libmachine: Creating SSH key...
	I0307 14:49:59.073410    4748 main.go:141] libmachine: Creating Disk image...
	I0307 14:49:59.073419    4748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:49:59.073615    4748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:59.086317    4748 main.go:141] libmachine: STDOUT: 
	I0307 14:49:59.086338    4748 main.go:141] libmachine: STDERR: 
	I0307 14:49:59.086395    4748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2 +20000M
	I0307 14:49:59.097232    4748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:49:59.097251    4748 main.go:141] libmachine: STDERR: 
	I0307 14:49:59.097263    4748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:59.097269    4748 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:49:59.097298    4748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:9f:55:58:c8:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kindnet-581000/disk.qcow2
	I0307 14:49:59.099124    4748 main.go:141] libmachine: STDOUT: 
	I0307 14:49:59.099140    4748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:49:59.099155    4748 client.go:171] duration metric: took 280.113208ms to LocalClient.Create
	I0307 14:50:01.101377    4748 start.go:128] duration metric: took 2.341936416s to createHost
	I0307 14:50:01.101456    4748 start.go:83] releasing machines lock for "kindnet-581000", held for 2.342500083s
	W0307 14:50:01.101749    4748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:01.116474    4748 out.go:177] 
	W0307 14:50:01.120444    4748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:50:01.120480    4748 out.go:239] * 
	* 
	W0307 14:50:01.123148    4748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:01.136482    4748 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.93s)
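Note: every start in this group fails the same way. The qemu2 driver launches each VM through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation fails on the first attempt and again on the automatic retry. A minimal triage sketch for the CI host is below; the brew services line assumes socket_vmnet is managed as a Homebrew service, so adjust it if the daemon was installed from source:

	# Is anything serving the unix socket the driver expects?
	ls -l /var/run/socket_vmnet

	# Restart the daemon; it needs root privileges to create vmnet interfaces.
	sudo brew services restart socket_vmnet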

TestNetworkPlugins/group/calico/Start (9.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.911544167s)

-- stdout --
	* [calico-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-581000" primary control-plane node in "calico-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:50:03.506742    4874 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:50:03.506879    4874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:03.506883    4874 out.go:304] Setting ErrFile to fd 2...
	I0307 14:50:03.506885    4874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:03.507011    4874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:50:03.508100    4874 out.go:298] Setting JSON to false
	I0307 14:50:03.524480    4874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4760,"bootTime":1709847043,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:50:03.524543    4874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:50:03.530519    4874 out.go:177] * [calico-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:50:03.537308    4874 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:50:03.541401    4874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:50:03.537375    4874 notify.go:220] Checking for updates...
	I0307 14:50:03.544376    4874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:50:03.547320    4874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:50:03.550329    4874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:50:03.553266    4874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:50:03.556702    4874 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:50:03.556765    4874 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:50:03.556813    4874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:50:03.561341    4874 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:50:03.568339    4874 start.go:297] selected driver: qemu2
	I0307 14:50:03.568344    4874 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:50:03.568348    4874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:50:03.570652    4874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:50:03.574406    4874 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:50:03.577330    4874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:50:03.577361    4874 cni.go:84] Creating CNI manager for "calico"
	I0307 14:50:03.577364    4874 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0307 14:50:03.577392    4874 start.go:340] cluster config:
	{Name:calico-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:50:03.581961    4874 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:50:03.588233    4874 out.go:177] * Starting "calico-581000" primary control-plane node in "calico-581000" cluster
	I0307 14:50:03.592319    4874 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:50:03.592336    4874 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:50:03.592346    4874 cache.go:56] Caching tarball of preloaded images
	I0307 14:50:03.592412    4874 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:50:03.592418    4874 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:50:03.592484    4874 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/calico-581000/config.json ...
	I0307 14:50:03.592498    4874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/calico-581000/config.json: {Name:mke5e21f7ee302e93ebf3b164fb13413e32cc129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:50:03.592738    4874 start.go:360] acquireMachinesLock for calico-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:03.592770    4874 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "calico-581000"
	I0307 14:50:03.592780    4874 start.go:93] Provisioning new machine with config: &{Name:calico-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:03.592812    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:03.601265    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:03.617669    4874 start.go:159] libmachine.API.Create for "calico-581000" (driver="qemu2")
	I0307 14:50:03.617695    4874 client.go:168] LocalClient.Create starting
	I0307 14:50:03.617755    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:03.617783    4874 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:03.617795    4874 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:03.617841    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:03.617865    4874 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:03.617875    4874 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:03.618242    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:03.783353    4874 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:03.930920    4874 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:03.930936    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:03.931129    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:03.944108    4874 main.go:141] libmachine: STDOUT: 
	I0307 14:50:03.944137    4874 main.go:141] libmachine: STDERR: 
	I0307 14:50:03.944203    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2 +20000M
	I0307 14:50:03.955405    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:03.955425    4874 main.go:141] libmachine: STDERR: 
	I0307 14:50:03.955447    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:03.955454    4874 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:03.955483    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:18:5e:35:c4:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:03.957271    4874 main.go:141] libmachine: STDOUT: 
	I0307 14:50:03.957284    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:03.957310    4874 client.go:171] duration metric: took 339.608625ms to LocalClient.Create
	I0307 14:50:05.958648    4874 start.go:128] duration metric: took 2.365783083s to createHost
	I0307 14:50:05.958752    4874 start.go:83] releasing machines lock for "calico-581000", held for 2.365967125s
	W0307 14:50:05.958808    4874 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:05.973957    4874 out.go:177] * Deleting "calico-581000" in qemu2 ...
	W0307 14:50:06.000419    4874 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:06.000507    4874 start.go:728] Will try again in 5 seconds ...
	I0307 14:50:11.002536    4874 start.go:360] acquireMachinesLock for calico-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:11.002666    4874 start.go:364] duration metric: took 101.167µs to acquireMachinesLock for "calico-581000"
	I0307 14:50:11.002693    4874 start.go:93] Provisioning new machine with config: &{Name:calico-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:11.002754    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:11.009964    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:11.027460    4874 start.go:159] libmachine.API.Create for "calico-581000" (driver="qemu2")
	I0307 14:50:11.027494    4874 client.go:168] LocalClient.Create starting
	I0307 14:50:11.027555    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:11.027594    4874 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:11.027605    4874 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:11.027643    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:11.027671    4874 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:11.027679    4874 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:11.028044    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:11.171335    4874 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:11.312281    4874 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:11.312294    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:11.312490    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:11.325096    4874 main.go:141] libmachine: STDOUT: 
	I0307 14:50:11.325120    4874 main.go:141] libmachine: STDERR: 
	I0307 14:50:11.325190    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2 +20000M
	I0307 14:50:11.336316    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:11.336338    4874 main.go:141] libmachine: STDERR: 
	I0307 14:50:11.336353    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:11.336360    4874 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:11.336401    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5b:97:21:ab:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/calico-581000/disk.qcow2
	I0307 14:50:11.338215    4874 main.go:141] libmachine: STDOUT: 
	I0307 14:50:11.338231    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:11.338246    4874 client.go:171] duration metric: took 310.746208ms to LocalClient.Create
	I0307 14:50:13.340452    4874 start.go:128] duration metric: took 2.337661959s to createHost
	I0307 14:50:13.340555    4874 start.go:83] releasing machines lock for "calico-581000", held for 2.337873166s
	W0307 14:50:13.340941    4874 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:13.351692    4874 out.go:177] 
	W0307 14:50:13.360714    4874 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:50:13.360769    4874 out.go:239] * 
	* 
	W0307 14:50:13.363403    4874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:13.374544    4874 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.91s)
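Note: the daemon outage can also be confirmed without minikube by running the same client wrapper shown in the "executing:" lines above; its first argument is the socket path and the rest is the command to wrap. A sketch, substituting a trivial echo for the qemu command:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

While the daemon is down, this reproduces the Failed to connect to "/var/run/socket_vmnet": Connection refused error seen in each failure in this group.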

TestNetworkPlugins/group/custom-flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.760196958s)

-- stdout --
	* [custom-flannel-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-581000" primary control-plane node in "custom-flannel-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:50:15.893501    4998 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:50:15.893613    4998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:15.893616    4998 out.go:304] Setting ErrFile to fd 2...
	I0307 14:50:15.893619    4998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:15.893755    4998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:50:15.894821    4998 out.go:298] Setting JSON to false
	I0307 14:50:15.911278    4998 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4772,"bootTime":1709847043,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:50:15.911343    4998 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:50:15.917200    4998 out.go:177] * [custom-flannel-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:50:15.923202    4998 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:50:15.923271    4998 notify.go:220] Checking for updates...
	I0307 14:50:15.928138    4998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:50:15.931112    4998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:50:15.934118    4998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:50:15.937114    4998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:50:15.938710    4998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:50:15.942508    4998 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:50:15.942570    4998 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:50:15.942637    4998 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:50:15.947169    4998 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:50:15.953090    4998 start.go:297] selected driver: qemu2
	I0307 14:50:15.953096    4998 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:50:15.953105    4998 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:50:15.955316    4998 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:50:15.958162    4998 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:50:15.961267    4998 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:50:15.961319    4998 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0307 14:50:15.961518    4998 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0307 14:50:15.961555    4998 start.go:340] cluster config:
	{Name:custom-flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:50:15.965613    4998 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:50:15.973112    4998 out.go:177] * Starting "custom-flannel-581000" primary control-plane node in "custom-flannel-581000" cluster
	I0307 14:50:15.977128    4998 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:50:15.977142    4998 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:50:15.977156    4998 cache.go:56] Caching tarball of preloaded images
	I0307 14:50:15.977212    4998 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:50:15.977219    4998 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:50:15.977295    4998 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/custom-flannel-581000/config.json ...
	I0307 14:50:15.977305    4998 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/custom-flannel-581000/config.json: {Name:mk8229b6e6da5e61828a632f2d9c0d05ba9c1b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:50:15.977624    4998 start.go:360] acquireMachinesLock for custom-flannel-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:15.977658    4998 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "custom-flannel-581000"
	I0307 14:50:15.977667    4998 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:15.977692    4998 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:15.982189    4998 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:15.997300    4998 start.go:159] libmachine.API.Create for "custom-flannel-581000" (driver="qemu2")
	I0307 14:50:15.997324    4998 client.go:168] LocalClient.Create starting
	I0307 14:50:15.997380    4998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:15.997407    4998 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:15.997414    4998 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:15.997456    4998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:15.997477    4998 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:15.997484    4998 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:15.997821    4998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:16.139311    4998 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:16.194227    4998 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:16.194244    4998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:16.194438    4998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:16.206423    4998 main.go:141] libmachine: STDOUT: 
	I0307 14:50:16.206447    4998 main.go:141] libmachine: STDERR: 
	I0307 14:50:16.206504    4998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2 +20000M
	I0307 14:50:16.217367    4998 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:16.217389    4998 main.go:141] libmachine: STDERR: 
	I0307 14:50:16.217406    4998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:16.217418    4998 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:16.217449    4998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:1a:56:f1:ba:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:16.219212    4998 main.go:141] libmachine: STDOUT: 
	I0307 14:50:16.219228    4998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:16.219249    4998 client.go:171] duration metric: took 221.915083ms to LocalClient.Create
	I0307 14:50:18.221505    4998 start.go:128] duration metric: took 2.243776625s to createHost
	I0307 14:50:18.221621    4998 start.go:83] releasing machines lock for "custom-flannel-581000", held for 2.243949417s
	W0307 14:50:18.221677    4998 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:18.228649    4998 out.go:177] * Deleting "custom-flannel-581000" in qemu2 ...
	W0307 14:50:18.266361    4998 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:18.266404    4998 start.go:728] Will try again in 5 seconds ...
	I0307 14:50:23.268008    4998 start.go:360] acquireMachinesLock for custom-flannel-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:23.268373    4998 start.go:364] duration metric: took 275.834µs to acquireMachinesLock for "custom-flannel-581000"
	I0307 14:50:23.268499    4998 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:23.268655    4998 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:23.273397    4998 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:23.306574    4998 start.go:159] libmachine.API.Create for "custom-flannel-581000" (driver="qemu2")
	I0307 14:50:23.306639    4998 client.go:168] LocalClient.Create starting
	I0307 14:50:23.306735    4998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:23.306793    4998 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:23.306805    4998 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:23.306864    4998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:23.306913    4998 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:23.306925    4998 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:23.307403    4998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:23.454624    4998 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:23.551467    4998 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:23.551481    4998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:23.551662    4998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:23.564272    4998 main.go:141] libmachine: STDOUT: 
	I0307 14:50:23.564294    4998 main.go:141] libmachine: STDERR: 
	I0307 14:50:23.564358    4998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2 +20000M
	I0307 14:50:23.575040    4998 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:23.575057    4998 main.go:141] libmachine: STDERR: 
	I0307 14:50:23.575069    4998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:23.575074    4998 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:23.575120    4998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6d:fe:ce:dd:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/custom-flannel-581000/disk.qcow2
	I0307 14:50:23.576887    4998 main.go:141] libmachine: STDOUT: 
	I0307 14:50:23.576903    4998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:23.576916    4998 client.go:171] duration metric: took 270.272125ms to LocalClient.Create
	I0307 14:50:25.579093    4998 start.go:128] duration metric: took 2.310403084s to createHost
	I0307 14:50:25.579171    4998 start.go:83] releasing machines lock for "custom-flannel-581000", held for 2.310777209s
	W0307 14:50:25.579459    4998 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:25.589187    4998 out.go:177] 
	W0307 14:50:25.597290    4998 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:50:25.597317    4998 out.go:239] * 
	W0307 14:50:25.599820    4998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:25.609169    4998 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
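Every start attempt in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched. The following is a minimal, hypothetical diagnostic, not part of net_test.go or minikube, assuming only the socket path reported in the logs; for a unix socket, "connection refused" means the socket file exists but nothing is accepting on it.

    // probe_socket_vmnet.go — hypothetical standalone diagnostic, not part of the suite.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const socketPath = "/var/run/socket_vmnet" // path reported in the failures above
    	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
    	if err != nil {
    		// ECONNREFUSED here matches these tests: the socket file exists,
    		// but no socket_vmnet daemon is accepting connections on it.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails on the CI host, every qemu2 test that selects the socket_vmnet network will fail the same way, which is the pattern across this report.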

TestNetworkPlugins/group/false/Start (9.72s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.717443667s)

-- stdout --
	* [false-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-581000" primary control-plane node in "false-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:50:28.140109    5123 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:50:28.140222    5123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:28.140225    5123 out.go:304] Setting ErrFile to fd 2...
	I0307 14:50:28.140227    5123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:28.140339    5123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:50:28.141525    5123 out.go:298] Setting JSON to false
	I0307 14:50:28.157940    5123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4785,"bootTime":1709847043,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:50:28.158002    5123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:50:28.161742    5123 out.go:177] * [false-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:50:28.169723    5123 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:50:28.173723    5123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:50:28.169784    5123 notify.go:220] Checking for updates...
	I0307 14:50:28.176683    5123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:50:28.179676    5123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:50:28.181260    5123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:50:28.184666    5123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:50:28.187959    5123 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:50:28.188045    5123 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:50:28.188107    5123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:50:28.192599    5123 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:50:28.199664    5123 start.go:297] selected driver: qemu2
	I0307 14:50:28.199669    5123 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:50:28.199675    5123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:50:28.201993    5123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:50:28.203641    5123 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:50:28.206729    5123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:50:28.206762    5123 cni.go:84] Creating CNI manager for "false"
	I0307 14:50:28.206795    5123 start.go:340] cluster config:
	{Name:false-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:50:28.211521    5123 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:50:28.218692    5123 out.go:177] * Starting "false-581000" primary control-plane node in "false-581000" cluster
	I0307 14:50:28.222656    5123 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:50:28.222670    5123 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:50:28.222678    5123 cache.go:56] Caching tarball of preloaded images
	I0307 14:50:28.222734    5123 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:50:28.222740    5123 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:50:28.222801    5123 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/false-581000/config.json ...
	I0307 14:50:28.222812    5123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/false-581000/config.json: {Name:mk2f635fe56a1dd895bea0e6041594d31aede5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:50:28.223150    5123 start.go:360] acquireMachinesLock for false-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:28.223180    5123 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "false-581000"
	I0307 14:50:28.223193    5123 start.go:93] Provisioning new machine with config: &{Name:false-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:28.223227    5123 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:28.230681    5123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:28.245857    5123 start.go:159] libmachine.API.Create for "false-581000" (driver="qemu2")
	I0307 14:50:28.245878    5123 client.go:168] LocalClient.Create starting
	I0307 14:50:28.245934    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:28.245961    5123 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:28.245969    5123 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:28.246020    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:28.246041    5123 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:28.246047    5123 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:28.246392    5123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:28.388823    5123 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:28.455742    5123 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:28.455748    5123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:28.455961    5123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:28.468221    5123 main.go:141] libmachine: STDOUT: 
	I0307 14:50:28.468251    5123 main.go:141] libmachine: STDERR: 
	I0307 14:50:28.468303    5123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2 +20000M
	I0307 14:50:28.479799    5123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:28.479817    5123 main.go:141] libmachine: STDERR: 
	I0307 14:50:28.479834    5123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:28.479837    5123 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:28.479865    5123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:53:3d:15:60:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:28.481467    5123 main.go:141] libmachine: STDOUT: 
	I0307 14:50:28.481483    5123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:28.481507    5123 client.go:171] duration metric: took 235.623083ms to LocalClient.Create
	I0307 14:50:30.483699    5123 start.go:128] duration metric: took 2.258821416s to createHost
	I0307 14:50:30.483768    5123 start.go:83] releasing machines lock for "false-581000", held for 2.258962125s
	W0307 14:50:30.483828    5123 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:30.494757    5123 out.go:177] * Deleting "false-581000" in qemu2 ...
	W0307 14:50:30.525276    5123 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:30.525313    5123 start.go:728] Will try again in 5 seconds ...
	I0307 14:50:35.534102    5123 start.go:360] acquireMachinesLock for false-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:35.534334    5123 start.go:364] duration metric: took 190.916µs to acquireMachinesLock for "false-581000"
	I0307 14:50:35.534365    5123 start.go:93] Provisioning new machine with config: &{Name:false-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:35.534460    5123 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:35.544873    5123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:35.572908    5123 start.go:159] libmachine.API.Create for "false-581000" (driver="qemu2")
	I0307 14:50:35.572953    5123 client.go:168] LocalClient.Create starting
	I0307 14:50:35.573035    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:35.573088    5123 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:35.573100    5123 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:35.573159    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:35.573191    5123 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:35.573202    5123 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:35.573650    5123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:35.720427    5123 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:35.767547    5123 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:35.767553    5123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:35.767736    5123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:35.780151    5123 main.go:141] libmachine: STDOUT: 
	I0307 14:50:35.780179    5123 main.go:141] libmachine: STDERR: 
	I0307 14:50:35.780235    5123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2 +20000M
	I0307 14:50:35.791024    5123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:35.791046    5123 main.go:141] libmachine: STDERR: 
	I0307 14:50:35.791062    5123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:35.791067    5123 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:35.791102    5123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e1:3e:47:1f:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/false-581000/disk.qcow2
	I0307 14:50:35.792873    5123 main.go:141] libmachine: STDOUT: 
	I0307 14:50:35.792889    5123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:35.792902    5123 client.go:171] duration metric: took 219.702875ms to LocalClient.Create
	I0307 14:50:37.797165    5123 start.go:128] duration metric: took 2.260304792s to createHost
	I0307 14:50:37.797291    5123 start.go:83] releasing machines lock for "false-581000", held for 2.260558792s
	W0307 14:50:37.797634    5123 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:37.807255    5123 out.go:177] 
	W0307 14:50:37.813389    5123 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:50:37.813427    5123 out.go:239] * 
	W0307 14:50:37.815126    5123 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:37.827255    5123 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.72s)
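As in the previous group, the log shows minikube's two-attempt flow: create the host; on failure, delete the profile, wait five seconds, retry once, then exit with status 80 (GUEST_PROVISION). Below is a simplified sketch of that control flow only, where createHost and deleteHost are illustrative stand-ins for the internals logged above, not minikube's actual API.

    // retry_sketch.go — simplified illustration of the flow in these logs.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    var errVmnet = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    func createHost() error { return errVmnet } // stands in for the failing VM creation
    func deleteHost()       {}                  // stands in for "* Deleting ... in qemu2 ..."

    func main() {
    	if err := createHost(); err != nil {
    		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    		deleteHost()
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		if err := createHost(); err != nil {
    			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
    			os.Exit(80) // the exit status net_test.go reports
    		}
    	}
    }

Because the failure is environmental rather than flaky, the retry cannot succeed, and each test spends roughly ten seconds (two ~2.3s create attempts plus the 5s back-off) before failing.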

TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.730466708s)

-- stdout --
	* [enable-default-cni-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-581000" primary control-plane node in "enable-default-cni-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:50:40.150475    5241 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:50:40.150600    5241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:40.150603    5241 out.go:304] Setting ErrFile to fd 2...
	I0307 14:50:40.150605    5241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:40.150735    5241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:50:40.151871    5241 out.go:298] Setting JSON to false
	I0307 14:50:40.168659    5241 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4797,"bootTime":1709847043,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:50:40.168719    5241 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:50:40.175137    5241 out.go:177] * [enable-default-cni-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:50:40.183170    5241 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:50:40.183213    5241 notify.go:220] Checking for updates...
	I0307 14:50:40.191089    5241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:50:40.194088    5241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:50:40.197132    5241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:50:40.200175    5241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:50:40.203164    5241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:50:40.206537    5241 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:50:40.206603    5241 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:50:40.206653    5241 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:50:40.211163    5241 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:50:40.218108    5241 start.go:297] selected driver: qemu2
	I0307 14:50:40.218114    5241 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:50:40.218120    5241 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:50:40.220456    5241 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:50:40.224179    5241 out.go:177] * Automatically selected the socket_vmnet network
	E0307 14:50:40.227137    5241 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0307 14:50:40.227149    5241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:50:40.227192    5241 cni.go:84] Creating CNI manager for "bridge"
	I0307 14:50:40.227196    5241 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:50:40.227235    5241 start.go:340] cluster config:
	{Name:enable-default-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:50:40.231835    5241 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:50:40.239200    5241 out.go:177] * Starting "enable-default-cni-581000" primary control-plane node in "enable-default-cni-581000" cluster
	I0307 14:50:40.243192    5241 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:50:40.243209    5241 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:50:40.243218    5241 cache.go:56] Caching tarball of preloaded images
	I0307 14:50:40.243292    5241 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:50:40.243300    5241 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:50:40.243383    5241 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/enable-default-cni-581000/config.json ...
	I0307 14:50:40.243396    5241 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/enable-default-cni-581000/config.json: {Name:mkb46cef7238861b40a70f1f2786ece9fbf25a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:50:40.243732    5241 start.go:360] acquireMachinesLock for enable-default-cni-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:40.243765    5241 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "enable-default-cni-581000"
	I0307 14:50:40.243775    5241 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:40.243814    5241 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:40.252134    5241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:40.268119    5241 start.go:159] libmachine.API.Create for "enable-default-cni-581000" (driver="qemu2")
	I0307 14:50:40.268152    5241 client.go:168] LocalClient.Create starting
	I0307 14:50:40.268209    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:40.268244    5241 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:40.268252    5241 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:40.268298    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:40.268321    5241 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:40.268329    5241 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:40.268730    5241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:40.412697    5241 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:40.451278    5241 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:40.451284    5241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:40.451457    5241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:40.464010    5241 main.go:141] libmachine: STDOUT: 
	I0307 14:50:40.464035    5241 main.go:141] libmachine: STDERR: 
	I0307 14:50:40.464096    5241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2 +20000M
	I0307 14:50:40.475243    5241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:40.475263    5241 main.go:141] libmachine: STDERR: 
	I0307 14:50:40.475283    5241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:40.475289    5241 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:40.475316    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:36:4a:fd:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:40.477019    5241 main.go:141] libmachine: STDOUT: 
	I0307 14:50:40.477035    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:40.477063    5241 client.go:171] duration metric: took 208.727125ms to LocalClient.Create
	I0307 14:50:42.480730    5241 start.go:128] duration metric: took 2.2351605s to createHost
	I0307 14:50:42.480745    5241 start.go:83] releasing machines lock for "enable-default-cni-581000", held for 2.23523525s
	W0307 14:50:42.480768    5241 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:42.489269    5241 out.go:177] * Deleting "enable-default-cni-581000" in qemu2 ...
	W0307 14:50:42.501462    5241 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:42.501469    5241 start.go:728] Will try again in 5 seconds ...
	I0307 14:50:47.506826    5241 start.go:360] acquireMachinesLock for enable-default-cni-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:47.507321    5241 start.go:364] duration metric: took 376.041µs to acquireMachinesLock for "enable-default-cni-581000"
	I0307 14:50:47.507476    5241 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:47.507731    5241 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:47.513424    5241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:47.562176    5241 start.go:159] libmachine.API.Create for "enable-default-cni-581000" (driver="qemu2")
	I0307 14:50:47.562240    5241 client.go:168] LocalClient.Create starting
	I0307 14:50:47.562373    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:47.562433    5241 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:47.562447    5241 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:47.562504    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:47.562545    5241 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:47.562559    5241 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:47.563124    5241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:47.721730    5241 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:47.784464    5241 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:47.784470    5241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:47.784646    5241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:47.797326    5241 main.go:141] libmachine: STDOUT: 
	I0307 14:50:47.797347    5241 main.go:141] libmachine: STDERR: 
	I0307 14:50:47.797416    5241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2 +20000M
	I0307 14:50:47.808806    5241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:47.808825    5241 main.go:141] libmachine: STDERR: 
	I0307 14:50:47.808835    5241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:47.808849    5241 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:47.808895    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:66:03:97:85:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/enable-default-cni-581000/disk.qcow2
	I0307 14:50:47.810700    5241 main.go:141] libmachine: STDOUT: 
	I0307 14:50:47.810717    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:47.810730    5241 client.go:171] duration metric: took 248.358042ms to LocalClient.Create
	I0307 14:50:49.813894    5241 start.go:128] duration metric: took 2.305001333s to createHost
	I0307 14:50:49.813962    5241 start.go:83] releasing machines lock for "enable-default-cni-581000", held for 2.305493s
	W0307 14:50:49.814394    5241 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:49.824996    5241 out.go:177] 
	W0307 14:50:49.831124    5241 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:50:49.831155    5241 out.go:239] * 
	* 
	W0307 14:50:49.833790    5241 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:50:49.843114    5241 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.73s)
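
All of the qemu2-driver failures in this group share one root cause: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube gives up with GUEST_PROVISION (exit status 80) after its single 5-second retry. A minimal diagnostic sketch for the CI host follows; it assumes the socket_vmnet daemon binary is installed alongside the client under /opt/socket_vmnet, which this log does not confirm.

    # Hedged diagnostic sketch, not part of the recorded test run.
    ls -l /var/run/socket_vmnet          # the Unix socket should exist
    pgrep -fl socket_vmnet               # the daemon process should be running
    # The client execs its trailing command after connecting, so a no-op
    # command distinguishes "daemon down" from a mistyped socket path:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
        && echo "daemon reachable" || echo "connection refused: daemon is down"

"Connection refused" on a Unix socket means nothing is accepting on that path, which points at a daemon that was never started (or has crashed) rather than at the QEMU invocation itself.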
TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.920363208s)

-- stdout --
	* [flannel-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-581000" primary control-plane node in "flannel-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:50:52.196852    5355 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:50:52.196980    5355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:52.196984    5355 out.go:304] Setting ErrFile to fd 2...
	I0307 14:50:52.196986    5355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:50:52.197116    5355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:50:52.198241    5355 out.go:298] Setting JSON to false
	I0307 14:50:52.215391    5355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4809,"bootTime":1709847043,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:50:52.215468    5355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:50:52.221994    5355 out.go:177] * [flannel-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:50:52.229889    5355 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:50:52.234024    5355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:50:52.229960    5355 notify.go:220] Checking for updates...
	I0307 14:50:52.238593    5355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:50:52.242040    5355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:50:52.245008    5355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:50:52.248031    5355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:50:52.251305    5355 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:50:52.251372    5355 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:50:52.251410    5355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:50:52.256054    5355 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:50:52.262969    5355 start.go:297] selected driver: qemu2
	I0307 14:50:52.262974    5355 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:50:52.262978    5355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:50:52.265117    5355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:50:52.269065    5355 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:50:52.272156    5355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:50:52.272213    5355 cni.go:84] Creating CNI manager for "flannel"
	I0307 14:50:52.272218    5355 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0307 14:50:52.272249    5355 start.go:340] cluster config:
	{Name:flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:50:52.276734    5355 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:50:52.284036    5355 out.go:177] * Starting "flannel-581000" primary control-plane node in "flannel-581000" cluster
	I0307 14:50:52.286931    5355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:50:52.286945    5355 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:50:52.286957    5355 cache.go:56] Caching tarball of preloaded images
	I0307 14:50:52.287019    5355 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:50:52.287024    5355 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:50:52.287084    5355 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/flannel-581000/config.json ...
	I0307 14:50:52.287096    5355 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/flannel-581000/config.json: {Name:mk7c4e8ca862b00cb4977c6f75efa9cb79c473fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:50:52.287444    5355 start.go:360] acquireMachinesLock for flannel-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:52.287479    5355 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "flannel-581000"
	I0307 14:50:52.287490    5355 start.go:93] Provisioning new machine with config: &{Name:flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:52.287523    5355 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:52.292067    5355 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:52.307848    5355 start.go:159] libmachine.API.Create for "flannel-581000" (driver="qemu2")
	I0307 14:50:52.307867    5355 client.go:168] LocalClient.Create starting
	I0307 14:50:52.307938    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:52.307967    5355 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:52.307977    5355 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:52.308030    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:52.308052    5355 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:52.308059    5355 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:52.308416    5355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:52.451543    5355 main.go:141] libmachine: Creating SSH key...
	I0307 14:50:52.597810    5355 main.go:141] libmachine: Creating Disk image...
	I0307 14:50:52.597819    5355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:50:52.598014    5355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:50:52.610466    5355 main.go:141] libmachine: STDOUT: 
	I0307 14:50:52.610487    5355 main.go:141] libmachine: STDERR: 
	I0307 14:50:52.610540    5355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2 +20000M
	I0307 14:50:52.621548    5355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:50:52.621568    5355 main.go:141] libmachine: STDERR: 
	I0307 14:50:52.621584    5355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:50:52.621591    5355 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:50:52.621632    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:5b:67:b9:f0:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:50:52.623487    5355 main.go:141] libmachine: STDOUT: 
	I0307 14:50:52.623504    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:50:52.623525    5355 client.go:171] duration metric: took 315.529709ms to LocalClient.Create
	I0307 14:50:54.626352    5355 start.go:128] duration metric: took 2.337969625s to createHost
	I0307 14:50:54.626387    5355 start.go:83] releasing machines lock for "flannel-581000", held for 2.338057041s
	W0307 14:50:54.626407    5355 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:54.635100    5355 out.go:177] * Deleting "flannel-581000" in qemu2 ...
	W0307 14:50:54.649955    5355 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:50:54.649965    5355 start.go:728] Will try again in 5 seconds ...
	I0307 14:50:59.651907    5355 start.go:360] acquireMachinesLock for flannel-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:50:59.652173    5355 start.go:364] duration metric: took 224.083µs to acquireMachinesLock for "flannel-581000"
	I0307 14:50:59.652206    5355 start.go:93] Provisioning new machine with config: &{Name:flannel-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:50:59.652313    5355 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:50:59.662507    5355 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:50:59.690163    5355 start.go:159] libmachine.API.Create for "flannel-581000" (driver="qemu2")
	I0307 14:50:59.690220    5355 client.go:168] LocalClient.Create starting
	I0307 14:50:59.690314    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:50:59.690353    5355 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:59.690364    5355 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:59.690418    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:50:59.690446    5355 main.go:141] libmachine: Decoding PEM data...
	I0307 14:50:59.690458    5355 main.go:141] libmachine: Parsing certificate...
	I0307 14:50:59.690882    5355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:50:59.841088    5355 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:00.022618    5355 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:00.022631    5355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:00.023017    5355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:51:00.035676    5355 main.go:141] libmachine: STDOUT: 
	I0307 14:51:00.035701    5355 main.go:141] libmachine: STDERR: 
	I0307 14:51:00.035771    5355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2 +20000M
	I0307 14:51:00.047089    5355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:00.047108    5355 main.go:141] libmachine: STDERR: 
	I0307 14:51:00.047125    5355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:51:00.047130    5355 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:00.047177    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:34:d4:fd:44:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/flannel-581000/disk.qcow2
	I0307 14:51:00.049137    5355 main.go:141] libmachine: STDOUT: 
	I0307 14:51:00.049153    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:00.049166    5355 client.go:171] duration metric: took 358.855625ms to LocalClient.Create
	I0307 14:51:02.050046    5355 start.go:128] duration metric: took 2.397159875s to createHost
	I0307 14:51:02.050098    5355 start.go:83] releasing machines lock for "flannel-581000", held for 2.39736725s
	W0307 14:51:02.050312    5355 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:02.063767    5355 out.go:177] 
	W0307 14:51:02.066705    5355 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:02.066718    5355 out.go:239] * 
	* 
	W0307 14:51:02.067913    5355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:02.077720    5355 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
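
flannel-581000 fails identically, before the VM ever boots, so the fix is on the host rather than in the test: bring the socket_vmnet daemon back up and re-run the group. A hedged recovery sketch, assuming the daemon binary lives under /opt/socket_vmnet and using the gateway address from the socket_vmnet README as a placeholder (neither value is recorded in this log):

    # Hedged recovery sketch; the binary path and gateway IP are assumptions.
    # The daemon needs root to create the vmnet interface:
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
    # Or, if socket_vmnet was installed through Homebrew:
    #   sudo brew services start socket_vmnet
    # Then retry the failed profile:
    #   out/minikube-darwin-arm64 start -p flannel-581000 --cni=flannel --driver=qemu2

Because minikube retries host creation once after 5 seconds (the "Will try again in 5 seconds" lines above), restarting the daemon mid-run can still rescue a test that has not yet made its second attempt.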

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.777729042s)

-- stdout --
	* [bridge-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-581000" primary control-plane node in "bridge-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:04.572593    5485 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:04.572729    5485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:04.572732    5485 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:04.572735    5485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:04.572862    5485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:04.573949    5485 out.go:298] Setting JSON to false
	I0307 14:51:04.590505    5485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4821,"bootTime":1709847043,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:04.590578    5485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:04.597253    5485 out.go:177] * [bridge-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:04.606218    5485 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:04.606302    5485 notify.go:220] Checking for updates...
	I0307 14:51:04.610202    5485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:04.613207    5485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:04.616168    5485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:04.619228    5485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:04.622233    5485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:04.625587    5485 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:04.625654    5485 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:51:04.625707    5485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:04.630211    5485 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:04.637258    5485 start.go:297] selected driver: qemu2
	I0307 14:51:04.637265    5485 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:04.637272    5485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:04.639635    5485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:04.643202    5485 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:04.646255    5485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:04.646288    5485 cni.go:84] Creating CNI manager for "bridge"
	I0307 14:51:04.646294    5485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:51:04.646329    5485 start.go:340] cluster config:
	{Name:bridge-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:04.650841    5485 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:04.658200    5485 out.go:177] * Starting "bridge-581000" primary control-plane node in "bridge-581000" cluster
	I0307 14:51:04.662240    5485 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:51:04.662254    5485 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:51:04.662262    5485 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:04.662321    5485 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:04.662327    5485 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:51:04.662384    5485 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/bridge-581000/config.json ...
	I0307 14:51:04.662395    5485 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/bridge-581000/config.json: {Name:mke788eb49745e6acbaf76d22989df06c2075f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:04.662738    5485 start.go:360] acquireMachinesLock for bridge-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:04.662768    5485 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "bridge-581000"
	I0307 14:51:04.662777    5485 start.go:93] Provisioning new machine with config: &{Name:bridge-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:04.662805    5485 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:04.667243    5485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:51:04.681595    5485 start.go:159] libmachine.API.Create for "bridge-581000" (driver="qemu2")
	I0307 14:51:04.681628    5485 client.go:168] LocalClient.Create starting
	I0307 14:51:04.681687    5485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:04.681714    5485 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:04.681725    5485 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:04.681766    5485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:04.681790    5485 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:04.681798    5485 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:04.682216    5485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:04.825453    5485 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:04.881543    5485 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:04.881549    5485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:04.881751    5485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:04.894285    5485 main.go:141] libmachine: STDOUT: 
	I0307 14:51:04.894307    5485 main.go:141] libmachine: STDERR: 
	I0307 14:51:04.894365    5485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2 +20000M
	I0307 14:51:04.905341    5485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:04.905361    5485 main.go:141] libmachine: STDERR: 
	I0307 14:51:04.905379    5485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:04.905387    5485 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:04.905422    5485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:3b:c0:91:12:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:04.907228    5485 main.go:141] libmachine: STDOUT: 
	I0307 14:51:04.907245    5485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:04.907265    5485 client.go:171] duration metric: took 225.590583ms to LocalClient.Create
	I0307 14:51:06.909861    5485 start.go:128] duration metric: took 2.246628459s to createHost
	I0307 14:51:06.909939    5485 start.go:83] releasing machines lock for "bridge-581000", held for 2.246785708s
	W0307 14:51:06.909995    5485 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:06.923547    5485 out.go:177] * Deleting "bridge-581000" in qemu2 ...
	W0307 14:51:06.947800    5485 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:06.947835    5485 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:11.950700    5485 start.go:360] acquireMachinesLock for bridge-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:11.951227    5485 start.go:364] duration metric: took 427.125µs to acquireMachinesLock for "bridge-581000"
	I0307 14:51:11.951376    5485 start.go:93] Provisioning new machine with config: &{Name:bridge-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:11.951654    5485 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:11.962312    5485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:51:12.012577    5485 start.go:159] libmachine.API.Create for "bridge-581000" (driver="qemu2")
	I0307 14:51:12.012632    5485 client.go:168] LocalClient.Create starting
	I0307 14:51:12.012774    5485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:12.012829    5485 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:12.012847    5485 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:12.012910    5485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:12.012951    5485 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:12.012962    5485 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:12.013516    5485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:12.166376    5485 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:12.260757    5485 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:12.260764    5485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:12.260980    5485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:12.273695    5485 main.go:141] libmachine: STDOUT: 
	I0307 14:51:12.273718    5485 main.go:141] libmachine: STDERR: 
	I0307 14:51:12.273779    5485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2 +20000M
	I0307 14:51:12.285056    5485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:12.285077    5485 main.go:141] libmachine: STDERR: 
	I0307 14:51:12.285088    5485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:12.285107    5485 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:12.285146    5485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:65:2d:75:29:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/bridge-581000/disk.qcow2
	I0307 14:51:12.286913    5485 main.go:141] libmachine: STDOUT: 
	I0307 14:51:12.286929    5485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:12.286942    5485 client.go:171] duration metric: took 274.271666ms to LocalClient.Create
	I0307 14:51:14.289326    5485 start.go:128] duration metric: took 2.337363875s to createHost
	I0307 14:51:14.289381    5485 start.go:83] releasing machines lock for "bridge-581000", held for 2.337874834s
	W0307 14:51:14.289695    5485 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:14.296082    5485 out.go:177] 
	W0307 14:51:14.301109    5485 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:14.301147    5485 out.go:239] * 
	* 
	W0307 14:51:14.302712    5485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:14.310003    5485 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
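Every failure in this group traces to the same STDERR line: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and minikube exits with GUEST_PROVISION. A reasonable first check on the agent is whether the daemon is running and owns that socket at all. The commands below are a minimal sketch assuming the /opt/socket_vmnet layout shown in these logs; the launchd service label is an assumption and varies by install:

	# Does the Unix socket exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process alive?
	pgrep -fl socket_vmnet
	# If installed as a launchd service (label is an assumption; adjust to the install):
	sudo launchctl list | grep -i socket_vmnet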

TestNetworkPlugins/group/kubenet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-581000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.781446875s)

-- stdout --
	* [kubenet-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-581000" primary control-plane node in "kubenet-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:16.677638    5603 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:16.677763    5603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:16.677767    5603 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:16.677769    5603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:16.677903    5603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:16.679089    5603 out.go:298] Setting JSON to false
	I0307 14:51:16.696015    5603 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4833,"bootTime":1709847043,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:16.696087    5603 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:16.702078    5603 out.go:177] * [kubenet-581000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:16.708961    5603 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:16.709034    5603 notify.go:220] Checking for updates...
	I0307 14:51:16.716883    5603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:16.720000    5603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:16.722973    5603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:16.725940    5603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:16.728920    5603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:16.732290    5603 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:16.732359    5603 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:51:16.732405    5603 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:16.736884    5603 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:16.743962    5603 start.go:297] selected driver: qemu2
	I0307 14:51:16.743968    5603 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:16.743973    5603 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:16.746349    5603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:16.750003    5603 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:16.753037    5603 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:16.753071    5603 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0307 14:51:16.753098    5603 start.go:340] cluster config:
	{Name:kubenet-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:16.757664    5603 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:16.769000    5603 out.go:177] * Starting "kubenet-581000" primary control-plane node in "kubenet-581000" cluster
	I0307 14:51:16.772908    5603 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:51:16.772923    5603 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:51:16.772933    5603 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:16.772981    5603 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:16.772987    5603 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:51:16.773053    5603 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kubenet-581000/config.json ...
	I0307 14:51:16.773064    5603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/kubenet-581000/config.json: {Name:mk9cf726b3982dca85ebd28c95e93f51e16682ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:16.773289    5603 start.go:360] acquireMachinesLock for kubenet-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:16.773321    5603 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "kubenet-581000"
	I0307 14:51:16.773332    5603 start.go:93] Provisioning new machine with config: &{Name:kubenet-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:16.773368    5603 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:16.781921    5603 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:51:16.799152    5603 start.go:159] libmachine.API.Create for "kubenet-581000" (driver="qemu2")
	I0307 14:51:16.799185    5603 client.go:168] LocalClient.Create starting
	I0307 14:51:16.799259    5603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:16.799289    5603 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:16.799301    5603 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:16.799346    5603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:16.799367    5603 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:16.799375    5603 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:16.799770    5603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:16.942965    5603 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:16.978403    5603 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:16.978408    5603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:16.978616    5603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:16.990988    5603 main.go:141] libmachine: STDOUT: 
	I0307 14:51:16.991014    5603 main.go:141] libmachine: STDERR: 
	I0307 14:51:16.991070    5603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2 +20000M
	I0307 14:51:17.001924    5603 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:17.001941    5603 main.go:141] libmachine: STDERR: 
	I0307 14:51:17.001962    5603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:17.001967    5603 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:17.002002    5603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:f6:04:09:ce:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:17.003747    5603 main.go:141] libmachine: STDOUT: 
	I0307 14:51:17.003764    5603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:17.003784    5603 client.go:171] duration metric: took 204.574958ms to LocalClient.Create
	I0307 14:51:19.006235    5603 start.go:128] duration metric: took 2.232638417s to createHost
	I0307 14:51:19.006337    5603 start.go:83] releasing machines lock for "kubenet-581000", held for 2.232822s
	W0307 14:51:19.006400    5603 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:19.016577    5603 out.go:177] * Deleting "kubenet-581000" in qemu2 ...
	W0307 14:51:19.047892    5603 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:19.047932    5603 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:24.050155    5603 start.go:360] acquireMachinesLock for kubenet-581000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:24.050744    5603 start.go:364] duration metric: took 421.625µs to acquireMachinesLock for "kubenet-581000"
	I0307 14:51:24.050897    5603 start.go:93] Provisioning new machine with config: &{Name:kubenet-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:24.051129    5603 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:24.059601    5603 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 14:51:24.108113    5603 start.go:159] libmachine.API.Create for "kubenet-581000" (driver="qemu2")
	I0307 14:51:24.108172    5603 client.go:168] LocalClient.Create starting
	I0307 14:51:24.108309    5603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:24.108374    5603 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:24.108389    5603 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:24.108467    5603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:24.108522    5603 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:24.108533    5603 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:24.109089    5603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:24.262927    5603 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:24.353415    5603 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:24.353421    5603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:24.353613    5603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:24.366301    5603 main.go:141] libmachine: STDOUT: 
	I0307 14:51:24.366332    5603 main.go:141] libmachine: STDERR: 
	I0307 14:51:24.366388    5603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2 +20000M
	I0307 14:51:24.377319    5603 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:24.377338    5603 main.go:141] libmachine: STDERR: 
	I0307 14:51:24.377350    5603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:24.377356    5603 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:24.377396    5603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:dd:78:39:9b:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/kubenet-581000/disk.qcow2
	I0307 14:51:24.379243    5603 main.go:141] libmachine: STDOUT: 
	I0307 14:51:24.379259    5603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:24.379272    5603 client.go:171] duration metric: took 271.077834ms to LocalClient.Create
	I0307 14:51:26.381584    5603 start.go:128] duration metric: took 2.330289583s to createHost
	I0307 14:51:26.381677    5603 start.go:83] releasing machines lock for "kubenet-581000", held for 2.330774875s
	W0307 14:51:26.382065    5603 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:26.393665    5603 out.go:177] 
	W0307 14:51:26.401746    5603 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:26.401797    5603 out.go:239] * 
	* 
	W0307 14:51:26.404577    5603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:26.414621    5603 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.78s)
E0307 14:52:13.965719    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
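In each start above, minikube wraps qemu-system-aarch64 in socket_vmnet_client, which connects to the daemon's Unix socket and hands the connection to QEMU as fd 3 (hence the -netdev socket,id=net0,fd=3 argument). That makes the failure reproducible in isolation, without minikube: socket_vmnet_client takes a socket path followed by an arbitrary command, so running a trivial command through it should surface the same error while the daemon is down. A sketch, assuming the paths from this job:

	# socket_vmnet_client <socket> <command> [args...] runs the command with the
	# vmnet connection on fd 3; with nothing listening it fails up front.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# Expected while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused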

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.821738s)

-- stdout --
	* [old-k8s-version-935000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:28.721494    5725 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:28.721730    5725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:28.721736    5725 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:28.721739    5725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:28.722030    5725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:28.723351    5725 out.go:298] Setting JSON to false
	I0307 14:51:28.740528    5725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4845,"bootTime":1709847043,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:28.740602    5725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:28.744797    5725 out.go:177] * [old-k8s-version-935000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:28.751802    5725 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:28.751887    5725 notify.go:220] Checking for updates...
	I0307 14:51:28.754800    5725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:28.757830    5725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:28.759464    5725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:28.762833    5725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:28.765839    5725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:28.769188    5725 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:28.769257    5725 config.go:182] Loaded profile config "stopped-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 14:51:28.769306    5725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:28.773755    5725 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:28.780822    5725 start.go:297] selected driver: qemu2
	I0307 14:51:28.780829    5725 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:28.780836    5725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:28.783194    5725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:28.786818    5725 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:28.789888    5725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:28.789923    5725 cni.go:84] Creating CNI manager for ""
	I0307 14:51:28.789930    5725 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 14:51:28.789956    5725 start.go:340] cluster config:
	{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:28.794425    5725 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:28.801842    5725 out.go:177] * Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	I0307 14:51:28.805816    5725 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 14:51:28.805833    5725 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 14:51:28.805845    5725 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:28.805914    5725 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:28.805920    5725 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 14:51:28.805995    5725 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/old-k8s-version-935000/config.json ...
	I0307 14:51:28.806011    5725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/old-k8s-version-935000/config.json: {Name:mk88d80bcc005269c2817d4f84d3c34587c6c6f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:28.806237    5725 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:28.806270    5725 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "old-k8s-version-935000"
	I0307 14:51:28.806280    5725 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:28.806306    5725 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:28.814810    5725 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:28.830636    5725 start.go:159] libmachine.API.Create for "old-k8s-version-935000" (driver="qemu2")
	I0307 14:51:28.830660    5725 client.go:168] LocalClient.Create starting
	I0307 14:51:28.830717    5725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:28.830748    5725 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:28.830759    5725 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:28.830800    5725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:28.830830    5725 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:28.830835    5725 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:28.831236    5725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:28.974791    5725 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:29.076054    5725 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:29.076062    5725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:29.076266    5725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:29.088827    5725 main.go:141] libmachine: STDOUT: 
	I0307 14:51:29.088844    5725 main.go:141] libmachine: STDERR: 
	I0307 14:51:29.088902    5725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2 +20000M
	I0307 14:51:29.099780    5725 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:29.099798    5725 main.go:141] libmachine: STDERR: 
	I0307 14:51:29.099816    5725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:29.099821    5725 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:29.099879    5725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:8b:bb:a6:f7:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:29.101588    5725 main.go:141] libmachine: STDOUT: 
	I0307 14:51:29.101606    5725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:29.101631    5725 client.go:171] duration metric: took 270.953084ms to LocalClient.Create
	I0307 14:51:31.103832    5725 start.go:128] duration metric: took 2.297417875s to createHost
	I0307 14:51:31.103849    5725 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 2.297476125s
	W0307 14:51:31.103861    5725 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:31.114247    5725 out.go:177] * Deleting "old-k8s-version-935000" in qemu2 ...
	W0307 14:51:31.133142    5725 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:31.133154    5725 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:36.135526    5725 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:36.135958    5725 start.go:364] duration metric: took 326.75µs to acquireMachinesLock for "old-k8s-version-935000"
	I0307 14:51:36.136111    5725 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:36.136379    5725 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:36.142272    5725 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:36.190309    5725 start.go:159] libmachine.API.Create for "old-k8s-version-935000" (driver="qemu2")
	I0307 14:51:36.190353    5725 client.go:168] LocalClient.Create starting
	I0307 14:51:36.190460    5725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:36.190517    5725 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:36.190537    5725 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:36.190613    5725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:36.190653    5725 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:36.190668    5725 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:36.191222    5725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:36.347536    5725 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:36.425917    5725 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:36.425923    5725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:36.426127    5725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:36.438801    5725 main.go:141] libmachine: STDOUT: 
	I0307 14:51:36.438815    5725 main.go:141] libmachine: STDERR: 
	I0307 14:51:36.438881    5725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2 +20000M
	I0307 14:51:36.450089    5725 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:36.450102    5725 main.go:141] libmachine: STDERR: 
	I0307 14:51:36.450116    5725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:36.450120    5725 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:36.450174    5725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:fd:87:23:a1:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:36.451930    5725 main.go:141] libmachine: STDOUT: 
	I0307 14:51:36.451941    5725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:36.451955    5725 client.go:171] duration metric: took 261.586708ms to LocalClient.Create
	I0307 14:51:38.454216    5725 start.go:128] duration metric: took 2.317728416s to createHost
	I0307 14:51:38.454291    5725 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 2.318236459s
	W0307 14:51:38.454717    5725 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:38.468114    5725 out.go:177] 
	W0307 14:51:38.472403    5725 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:38.472528    5725 out.go:239] * 
	* 
	W0307 14:51:38.475007    5725 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:38.491359    5725 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (67.264542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
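Every failure in this group reduces to the same host-side condition visible in the logs above: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is not accepting connections. A minimal pre-flight check on the CI agent might look like the following (a sketch assuming the install paths shown in the log; pgrep, ls, and nc are stock macOS tools):

	# Is the socket_vmnet daemon running at all?
	pgrep -fl socket_vmnet

	# Does the unix socket exist, and does it accept a connection?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet </dev/null && echo "socket accepts connections"

If the socket is absent or refuses connections, restarting the socket_vmnet service on the agent before re-running the suite should clear this entire failure group, since every subsequent test fails on the identical "Connection refused".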

TestStartStop/group/no-preload/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (10.039483792s)

-- stdout --
	* [no-preload-303000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-303000" primary control-plane node in "no-preload-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:30.945968    5739 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:30.946139    5739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:30.946143    5739 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:30.946149    5739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:30.946284    5739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:30.947237    5739 out.go:298] Setting JSON to false
	I0307 14:51:30.963298    5739 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4847,"bootTime":1709847043,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:30.963388    5739 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:30.970304    5739 out.go:177] * [no-preload-303000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:30.978144    5739 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:30.987348    5739 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:30.978184    5739 notify.go:220] Checking for updates...
	I0307 14:51:30.990241    5739 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:30.993165    5739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:30.996220    5739 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:30.999275    5739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:31.001152    5739 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:31.001229    5739 config.go:182] Loaded profile config "old-k8s-version-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 14:51:31.001283    5739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:31.005274    5739 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:31.012128    5739 start.go:297] selected driver: qemu2
	I0307 14:51:31.012134    5739 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:31.012143    5739 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:31.014440    5739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:31.018202    5739 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:31.021311    5739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:31.021352    5739 cni.go:84] Creating CNI manager for ""
	I0307 14:51:31.021359    5739 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:51:31.021364    5739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:51:31.021387    5739 start.go:340] cluster config:
	{Name:no-preload-303000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:31.026178    5739 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.033240    5739 out.go:177] * Starting "no-preload-303000" primary control-plane node in "no-preload-303000" cluster
	I0307 14:51:31.037286    5739 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 14:51:31.037395    5739 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/no-preload-303000/config.json ...
	I0307 14:51:31.037415    5739 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/no-preload-303000/config.json: {Name:mk5d3dde0a4f2f12b0735a0e953b39e558e923cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:31.037418    5739 cache.go:107] acquiring lock: {Name:mkfdec6903d3d5ee53708053d4a89336c4a1f76b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037433    5739 cache.go:107] acquiring lock: {Name:mk9357a2edafe692e1fbd2b75214904aa2f03992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037494    5739 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 14:51:31.037505    5739 cache.go:107] acquiring lock: {Name:mk1a23ddd1989f69741e7a1790c917155c776d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037515    5739 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.333µs
	I0307 14:51:31.037523    5739 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 14:51:31.037535    5739 cache.go:107] acquiring lock: {Name:mke25f8219d9d99051e3f51f4e7c9a1a60ebc90c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037625    5739 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 14:51:31.037434    5739 cache.go:107] acquiring lock: {Name:mkf6aa68dabf1b5803ed34fb53a8c3709dedc9a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037642    5739 cache.go:107] acquiring lock: {Name:mk998a118686b4a89e26f3f9743b371a9803f96c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037712    5739 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0307 14:51:31.037714    5739 start.go:360] acquireMachinesLock for no-preload-303000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:31.037724    5739 cache.go:107] acquiring lock: {Name:mk9e36072f43a2cdac6a9dd1116bea0eb2adb479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037768    5739 cache.go:107] acquiring lock: {Name:mk877c09a821f5f45a45516f48306d18ce0f42ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:31.037823    5739 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0307 14:51:31.037880    5739 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 14:51:31.037882    5739 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 14:51:31.037937    5739 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 14:51:31.038058    5739 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 14:51:31.044206    5739 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0307 14:51:31.044279    5739 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 14:51:31.044686    5739 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 14:51:31.044864    5739 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 14:51:31.045073    5739 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 14:51:31.045242    5739 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 14:51:31.045256    5739 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0307 14:51:31.103906    5739 start.go:364] duration metric: took 66.176833ms to acquireMachinesLock for "no-preload-303000"
	I0307 14:51:31.103949    5739 start.go:93] Provisioning new machine with config: &{Name:no-preload-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:31.104411    5739 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:31.122335    5739 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:31.138129    5739 start.go:159] libmachine.API.Create for "no-preload-303000" (driver="qemu2")
	I0307 14:51:31.138231    5739 client.go:168] LocalClient.Create starting
	I0307 14:51:31.138297    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:31.138323    5739 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:31.138332    5739 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:31.138368    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:31.138388    5739 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:31.138393    5739 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:31.140583    5739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:31.288216    5739 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:31.404751    5739 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:31.404765    5739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:31.405206    5739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:31.417626    5739 main.go:141] libmachine: STDOUT: 
	I0307 14:51:31.417670    5739 main.go:141] libmachine: STDERR: 
	I0307 14:51:31.417727    5739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2 +20000M
	I0307 14:51:31.428467    5739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:31.428482    5739 main.go:141] libmachine: STDERR: 
	I0307 14:51:31.428500    5739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:31.428507    5739 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:31.428532    5739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:76:fe:7a:fd:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:31.430412    5739 main.go:141] libmachine: STDOUT: 
	I0307 14:51:31.430431    5739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:31.430451    5739 client.go:171] duration metric: took 292.20275ms to LocalClient.Create
	I0307 14:51:32.971916    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0307 14:51:33.108391    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0307 14:51:33.113182    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0307 14:51:33.118310    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0307 14:51:33.119195    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0307 14:51:33.142354    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0307 14:51:33.151478    5739 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0307 14:51:33.231570    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 14:51:33.231630    5739 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.194057791s
	I0307 14:51:33.231656    5739 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 14:51:33.430900    5739 start.go:128] duration metric: took 2.32636325s to createHost
	I0307 14:51:33.430949    5739 start.go:83] releasing machines lock for "no-preload-303000", held for 2.326925375s
	W0307 14:51:33.431019    5739 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:33.445901    5739 out.go:177] * Deleting "no-preload-303000" in qemu2 ...
	W0307 14:51:33.484010    5739 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:33.484041    5739 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:35.505164    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 14:51:35.505243    5739 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 4.467439042s
	I0307 14:51:35.505272    5739 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 14:51:35.706157    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 14:51:35.706207    5739 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.668331625s
	I0307 14:51:35.706232    5739 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 14:51:36.443911    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 14:51:36.443921    5739 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 5.406073625s
	I0307 14:51:36.443929    5739 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 14:51:36.636525    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 14:51:36.636544    5739 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 5.598921875s
	I0307 14:51:36.636556    5739 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 14:51:38.070581    5739 cache.go:157] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 14:51:38.070635    5739 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 7.032964916s
	I0307 14:51:38.070659    5739 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 14:51:38.484464    5739 start.go:360] acquireMachinesLock for no-preload-303000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:38.484802    5739 start.go:364] duration metric: took 284.625µs to acquireMachinesLock for "no-preload-303000"
	I0307 14:51:38.484931    5739 start.go:93] Provisioning new machine with config: &{Name:no-preload-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:38.485185    5739 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:38.504322    5739 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:38.553778    5739 start.go:159] libmachine.API.Create for "no-preload-303000" (driver="qemu2")
	I0307 14:51:38.553948    5739 client.go:168] LocalClient.Create starting
	I0307 14:51:38.554042    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:38.554114    5739 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:38.554129    5739 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:38.554200    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:38.554228    5739 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:38.554241    5739 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:38.554742    5739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:38.759520    5739 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:38.893523    5739 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:38.893535    5739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:38.893692    5739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:38.905925    5739 main.go:141] libmachine: STDOUT: 
	I0307 14:51:38.905952    5739 main.go:141] libmachine: STDERR: 
	I0307 14:51:38.906039    5739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2 +20000M
	I0307 14:51:38.917887    5739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:38.917950    5739 main.go:141] libmachine: STDERR: 
	I0307 14:51:38.917963    5739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:38.917974    5739 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:38.918025    5739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ba:a6:6b:07:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:38.919923    5739 main.go:141] libmachine: STDOUT: 
	I0307 14:51:38.919941    5739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:38.919956    5739 client.go:171] duration metric: took 365.991458ms to LocalClient.Create
	I0307 14:51:40.920930    5739 start.go:128] duration metric: took 2.435638333s to createHost
	I0307 14:51:40.920954    5739 start.go:83] releasing machines lock for "no-preload-303000", held for 2.436066708s
	W0307 14:51:40.921029    5739 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:40.926047    5739 out.go:177] 
	W0307 14:51:40.933983    5739 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:40.933990    5739 out.go:239] * 
	* 
	W0307 14:51:40.934535    5739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:40.945046    5739 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (32.275167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.07s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml: exit status 1 (31.339959ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (35.602417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (35.870375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-935000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system: exit status 1 (27.724084ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (32.547334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.202563459s)

-- stdout --
	* [old-k8s-version-935000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:41.013909    5826 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:41.014029    5826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:41.014032    5826 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:41.014035    5826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:41.014174    5826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:41.015341    5826 out.go:298] Setting JSON to false
	I0307 14:51:41.033683    5826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4858,"bootTime":1709847043,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:41.033782    5826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:41.038051    5826 out.go:177] * [old-k8s-version-935000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:41.044068    5826 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:41.047967    5826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:41.044095    5826 notify.go:220] Checking for updates...
	I0307 14:51:41.057034    5826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:41.064043    5826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:41.065545    5826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:41.071999    5826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:41.073545    5826 config.go:182] Loaded profile config "old-k8s-version-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 14:51:41.077002    5826 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 14:51:41.080017    5826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:41.083831    5826 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:51:41.093003    5826 start.go:297] selected driver: qemu2
	I0307 14:51:41.093010    5826 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:41.093068    5826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:41.095449    5826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:41.095504    5826 cni.go:84] Creating CNI manager for ""
	I0307 14:51:41.095511    5826 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 14:51:41.095531    5826 start.go:340] cluster config:
	{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:41.099680    5826 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:41.107998    5826 out.go:177] * Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	I0307 14:51:41.110957    5826 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 14:51:41.111038    5826 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 14:51:41.111073    5826 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:41.111260    5826 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:41.111271    5826 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 14:51:41.111351    5826 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/old-k8s-version-935000/config.json ...
	I0307 14:51:41.111857    5826 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:41.111890    5826 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "old-k8s-version-935000"
	I0307 14:51:41.111899    5826 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:51:41.111904    5826 fix.go:54] fixHost starting: 
	I0307 14:51:41.112022    5826 fix.go:112] recreateIfNeeded on old-k8s-version-935000: state=Stopped err=<nil>
	W0307 14:51:41.112035    5826 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:51:41.115983    5826 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	I0307 14:51:41.123053    5826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:fd:87:23:a1:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:41.124964    5826 main.go:141] libmachine: STDOUT: 
	I0307 14:51:41.124984    5826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:41.125010    5826 fix.go:56] duration metric: took 13.105875ms for fixHost
	I0307 14:51:41.125015    5826 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 13.120583ms
	W0307 14:51:41.125022    5826 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:41.125056    5826 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:41.125060    5826 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:46.127482    5826 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:46.127855    5826 start.go:364] duration metric: took 270.709µs to acquireMachinesLock for "old-k8s-version-935000"
	I0307 14:51:46.127936    5826 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:51:46.127956    5826 fix.go:54] fixHost starting: 
	I0307 14:51:46.128575    5826 fix.go:112] recreateIfNeeded on old-k8s-version-935000: state=Stopped err=<nil>
	W0307 14:51:46.128600    5826 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:51:46.133863    5826 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	I0307 14:51:46.140181    5826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:fd:87:23:a1:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0307 14:51:46.149734    5826 main.go:141] libmachine: STDOUT: 
	I0307 14:51:46.149814    5826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:46.149909    5826 fix.go:56] duration metric: took 21.937292ms for fixHost
	I0307 14:51:46.149925    5826 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 22.047667ms
	W0307 14:51:46.150213    5826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:46.158930    5826 out.go:177] 
	W0307 14:51:46.162061    5826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:46.162112    5826 out.go:239] * 
	* 
	W0307 14:51:46.164962    5826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:46.172986    5826 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (69.694625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
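
Every failure in this group shares one root cause: nothing is accepting connections on the socket_vmnet control socket, so the qemu2 driver's socket_vmnet_client invocation dies before QEMU ever boots. A minimal triage sketch for the CI host, in plain shell; the relaunch line is an assumption based on the /opt/socket_vmnet install layout visible in the command lines above, so verify the flags against the installed binary's --help:

    # Is the socket present, and does any daemon still hold it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Hypothetical relaunch (assumed gateway flag; vmnet requires root):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet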

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-303000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-303000 create -f testdata/busybox.yaml: exit status 1 (29.714625ms)

** stderr ** 
	error: context "no-preload-303000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-303000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (37.768583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (35.874625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
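
The kubectl failures in this group are downstream of the failed start: "minikube start" exited before writing a kubeconfig entry, so the no-preload-303000 context simply does not exist. That can be confirmed with stock kubectl, which only reads the local kubeconfig and never contacts a cluster:

    # List the contexts the kubeconfig actually contains, and the active one.
    kubectl config get-contexts
    kubectl config current-context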

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-303000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-303000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-303000 describe deploy/metrics-server -n kube-system: exit status 1 (26.483ms)

** stderr ** 
	error: context "no-preload-303000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-303000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (31.078125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.197634042s)

-- stdout --
	* [no-preload-303000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-303000" primary control-plane node in "no-preload-303000" cluster
	* Restarting existing qemu2 VM for "no-preload-303000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-303000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:45.198831    5867 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:45.198979    5867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:45.198982    5867 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:45.198984    5867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:45.199108    5867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:45.200107    5867 out.go:298] Setting JSON to false
	I0307 14:51:45.216214    5867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4862,"bootTime":1709847043,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:45.216304    5867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:45.221205    5867 out.go:177] * [no-preload-303000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:45.227266    5867 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:45.227370    5867 notify.go:220] Checking for updates...
	I0307 14:51:45.231151    5867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:45.234178    5867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:45.237219    5867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:45.240143    5867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:45.243188    5867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:45.246442    5867 config.go:182] Loaded profile config "no-preload-303000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 14:51:45.246703    5867 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:45.251207    5867 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:51:45.258229    5867 start.go:297] selected driver: qemu2
	I0307 14:51:45.258235    5867 start.go:901] validating driver "qemu2" against &{Name:no-preload-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:45.258296    5867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:45.260609    5867 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:45.260658    5867 cni.go:84] Creating CNI manager for ""
	I0307 14:51:45.260666    5867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:51:45.260702    5867 start.go:340] cluster config:
	{Name:no-preload-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:45.265049    5867 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.273190    5867 out.go:177] * Starting "no-preload-303000" primary control-plane node in "no-preload-303000" cluster
	I0307 14:51:45.277172    5867 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 14:51:45.277261    5867 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/no-preload-303000/config.json ...
	I0307 14:51:45.277297    5867 cache.go:107] acquiring lock: {Name:mkfdec6903d3d5ee53708053d4a89336c4a1f76b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277335    5867 cache.go:107] acquiring lock: {Name:mkf6aa68dabf1b5803ed34fb53a8c3709dedc9a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277367    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 14:51:45.277373    5867 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.667µs
	I0307 14:51:45.277380    5867 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 14:51:45.277375    5867 cache.go:107] acquiring lock: {Name:mk9357a2edafe692e1fbd2b75214904aa2f03992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277386    5867 cache.go:107] acquiring lock: {Name:mk9e36072f43a2cdac6a9dd1116bea0eb2adb479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277315    5867 cache.go:107] acquiring lock: {Name:mk998a118686b4a89e26f3f9743b371a9803f96c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277449    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 14:51:45.277463    5867 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 102.458µs
	I0307 14:51:45.277470    5867 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 14:51:45.277467    5867 cache.go:107] acquiring lock: {Name:mk1a23ddd1989f69741e7a1790c917155c776d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277416    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 14:51:45.277497    5867 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 196.083µs
	I0307 14:51:45.277502    5867 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 14:51:45.277425    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 14:51:45.277431    5867 cache.go:107] acquiring lock: {Name:mk877c09a821f5f45a45516f48306d18ce0f42ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277507    5867 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 121.875µs
	I0307 14:51:45.277511    5867 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 14:51:45.277429    5867 cache.go:107] acquiring lock: {Name:mke25f8219d9d99051e3f51f4e7c9a1a60ebc90c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:45.277525    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 14:51:45.277543    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 14:51:45.277544    5867 cache.go:115] /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 14:51:45.277579    5867 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 274.125µs
	I0307 14:51:45.277588    5867 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 14:51:45.277580    5867 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 150.917µs
	I0307 14:51:45.277597    5867 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 14:51:45.277570    5867 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 121.042µs
	I0307 14:51:45.277609    5867 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 14:51:45.277600    5867 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0307 14:51:45.277835    5867 start.go:360] acquireMachinesLock for no-preload-303000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:45.277867    5867 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "no-preload-303000"
	I0307 14:51:45.277877    5867 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:51:45.277883    5867 fix.go:54] fixHost starting: 
	I0307 14:51:45.278025    5867 fix.go:112] recreateIfNeeded on no-preload-303000: state=Stopped err=<nil>
	W0307 14:51:45.278036    5867 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:51:45.286159    5867 out.go:177] * Restarting existing qemu2 VM for "no-preload-303000" ...
	I0307 14:51:45.290247    5867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ba:a6:6b:07:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:45.290884    5867 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0307 14:51:45.292619    5867 main.go:141] libmachine: STDOUT: 
	I0307 14:51:45.292647    5867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:45.292685    5867 fix.go:56] duration metric: took 14.800541ms for fixHost
	I0307 14:51:45.292690    5867 start.go:83] releasing machines lock for "no-preload-303000", held for 14.81775ms
	W0307 14:51:45.292698    5867 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:45.292732    5867 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:45.292737    5867 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:47.221413    5867 cache.go:162] opening:  /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0307 14:51:50.293103    5867 start.go:360] acquireMachinesLock for no-preload-303000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:50.293472    5867 start.go:364] duration metric: took 287.667µs to acquireMachinesLock for "no-preload-303000"
	I0307 14:51:50.293597    5867 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:51:50.293627    5867 fix.go:54] fixHost starting: 
	I0307 14:51:50.294324    5867 fix.go:112] recreateIfNeeded on no-preload-303000: state=Stopped err=<nil>
	W0307 14:51:50.294350    5867 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:51:50.312057    5867 out.go:177] * Restarting existing qemu2 VM for "no-preload-303000" ...
	I0307 14:51:50.316967    5867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ba:a6:6b:07:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/no-preload-303000/disk.qcow2
	I0307 14:51:50.327348    5867 main.go:141] libmachine: STDOUT: 
	I0307 14:51:50.327433    5867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:50.327530    5867 fix.go:56] duration metric: took 33.906167ms for fixHost
	I0307 14:51:50.327552    5867 start.go:83] releasing machines lock for "no-preload-303000", held for 34.056083ms
	W0307 14:51:50.327773    5867 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-303000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-303000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:50.335894    5867 out.go:177] 
	W0307 14:51:50.339039    5867 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:50.339077    5867 out.go:239] * 
	* 
	W0307 14:51:50.341680    5867 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:50.351871    5867 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-303000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (66.898083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
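
Same failure signature as the old-k8s-version group: the VM restart dies at socket_vmnet_client. The refusal can be reproduced without minikube by probing the Unix socket directly; macOS nc speaks Unix-domain sockets via -U, so a dead daemon answers with the same "Connection refused" seen in the driver logs:

    # Fails immediately while socket_vmnet is down; connects (and idles) once it is up.
    nc -U /var/run/socket_vmnet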

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-935000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (33.855208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-935000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.557667ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (31.131583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-935000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (30.696084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
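
Because the host never came up, "image list" returned nothing, so the entire v1.20.0 default image set is reported missing rather than any single image being wrong. For reference, the core of that expected set (everything except minikube's own storage-provisioner) can be regenerated with kubeadm; this assumes a kubeadm release old enough to still accept v1.20.0, since current releases reject versions that far outside their support skew:

    # Prints the k8s.gcr.io control-plane images kubeadm would pull for v1.20.0.
    kubeadm config images list --kubernetes-version v1.20.0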

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1: exit status 83 (43.741125ms)

-- stdout --
	* The control-plane node old-k8s-version-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-935000"

-- /stdout --
** stderr ** 
	I0307 14:51:46.452373    5890 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:46.452715    5890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:46.452719    5890 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:46.452721    5890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:46.452863    5890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:46.453048    5890 out.go:298] Setting JSON to false
	I0307 14:51:46.453058    5890 mustload.go:65] Loading cluster: old-k8s-version-935000
	I0307 14:51:46.453268    5890 config.go:182] Loaded profile config "old-k8s-version-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 14:51:46.457615    5890 out.go:177] * The control-plane node old-k8s-version-935000 host is not running: state=Stopped
	I0307 14:51:46.461662    5890 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-935000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (30.757541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (30.336708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.908667208s)

-- stdout --
	* [embed-certs-616000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:46.918497    5913 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:46.918631    5913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:46.918635    5913 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:46.918637    5913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:46.918781    5913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:46.919840    5913 out.go:298] Setting JSON to false
	I0307 14:51:46.936020    5913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4863,"bootTime":1709847043,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:46.936081    5913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:46.941456    5913 out.go:177] * [embed-certs-616000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:46.952481    5913 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:46.947650    5913 notify.go:220] Checking for updates...
	I0307 14:51:46.959515    5913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:46.962540    5913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:46.969544    5913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:46.973491    5913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:46.976504    5913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:46.979852    5913 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:46.979930    5913 config.go:182] Loaded profile config "no-preload-303000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 14:51:46.979972    5913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:46.983394    5913 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:46.990517    5913 start.go:297] selected driver: qemu2
	I0307 14:51:46.990525    5913 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:46.990531    5913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:46.992916    5913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:46.996504    5913 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:46.999620    5913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:46.999666    5913 cni.go:84] Creating CNI manager for ""
	I0307 14:51:46.999674    5913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:51:46.999678    5913 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:51:46.999713    5913 start.go:340] cluster config:
	{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:47.004692    5913 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:47.011533    5913 out.go:177] * Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	I0307 14:51:47.015524    5913 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:51:47.015542    5913 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:51:47.015554    5913 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:47.015619    5913 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:47.015633    5913 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:51:47.015706    5913 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/embed-certs-616000/config.json ...
	I0307 14:51:47.015720    5913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/embed-certs-616000/config.json: {Name:mkd065a44c9c568f9f08c56ee637c375715374b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:47.015987    5913 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:47.016024    5913 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "embed-certs-616000"
	I0307 14:51:47.016037    5913 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:47.016075    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:47.024525    5913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:47.042645    5913 start.go:159] libmachine.API.Create for "embed-certs-616000" (driver="qemu2")
	I0307 14:51:47.042675    5913 client.go:168] LocalClient.Create starting
	I0307 14:51:47.042761    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:47.042800    5913 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:47.042815    5913 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:47.042862    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:47.042889    5913 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:47.042897    5913 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:47.043311    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:47.199511    5913 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:47.406769    5913 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:47.406784    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:47.407023    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:47.419911    5913 main.go:141] libmachine: STDOUT: 
	I0307 14:51:47.419930    5913 main.go:141] libmachine: STDERR: 
	I0307 14:51:47.419991    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2 +20000M
	I0307 14:51:47.430912    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:47.430927    5913 main.go:141] libmachine: STDERR: 
	I0307 14:51:47.430938    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:47.430943    5913 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:47.430977    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:3a:d9:65:e4:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:47.432770    5913 main.go:141] libmachine: STDOUT: 
	I0307 14:51:47.432788    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:47.432811    5913 client.go:171] duration metric: took 390.121709ms to LocalClient.Create
	I0307 14:51:49.433266    5913 start.go:128] duration metric: took 2.417115417s to createHost
	I0307 14:51:49.433332    5913 start.go:83] releasing machines lock for "embed-certs-616000", held for 2.417249167s
	W0307 14:51:49.433412    5913 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:49.440515    5913 out.go:177] * Deleting "embed-certs-616000" in qemu2 ...
	W0307 14:51:49.471068    5913 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:49.471104    5913 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:54.473399    5913 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:54.473847    5913 start.go:364] duration metric: took 303.542µs to acquireMachinesLock for "embed-certs-616000"
	I0307 14:51:54.473982    5913 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:54.474284    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:54.482902    5913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:54.532774    5913 start.go:159] libmachine.API.Create for "embed-certs-616000" (driver="qemu2")
	I0307 14:51:54.532901    5913 client.go:168] LocalClient.Create starting
	I0307 14:51:54.533031    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:54.533096    5913 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:54.533115    5913 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:54.533199    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:54.533242    5913 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:54.533253    5913 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:54.534470    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:54.698206    5913 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:54.725874    5913 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:54.725880    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:54.726064    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:54.738613    5913 main.go:141] libmachine: STDOUT: 
	I0307 14:51:54.738636    5913 main.go:141] libmachine: STDERR: 
	I0307 14:51:54.738706    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2 +20000M
	I0307 14:51:54.749443    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:54.749461    5913 main.go:141] libmachine: STDERR: 
	I0307 14:51:54.749478    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:54.749483    5913 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:54.749527    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:61:d0:a0:13:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:51:54.751265    5913 main.go:141] libmachine: STDOUT: 
	I0307 14:51:54.751282    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:54.751294    5913 client.go:171] duration metric: took 218.383375ms to LocalClient.Create
	I0307 14:51:56.753497    5913 start.go:128] duration metric: took 2.279144917s to createHost
	I0307 14:51:56.753547    5913 start.go:83] releasing machines lock for "embed-certs-616000", held for 2.279638084s
	W0307 14:51:56.753877    5913 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:56.763511    5913 out.go:177] 
	W0307 14:51:56.770386    5913 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:51:56.770435    5913 out.go:239] * 
	* 
	W0307 14:51:56.773300    5913 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:51:56.782478    5913 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (67.244833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
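
Nearly all of this test's runtime is spent retrying one host-creation error: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so it never hands qemu-system-aarch64 the network file descriptor referenced by -netdev socket,id=net0,fd=3. A minimal diagnostic sketch, assuming the Homebrew-installed socket_vmnet service and the default socket path shown in the command line above (these checks are not part of the test run):

    ls -l /var/run/socket_vmnet                  # the unix socket should exist
    sudo launchctl list | grep -i socket_vmnet   # the daemon should be loaded
    # If socket_vmnet was installed via Homebrew, restarting the service is one plausible fix:
    sudo brew services restart socket_vmnet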

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-303000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (32.496083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
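
This subtest does not fail on its own: minikube only writes a profile's kubeconfig context after a successful start, and the earlier FirstStart for this profile exited with status 80 before a cluster existed, so every call against context "no-preload-303000" fails immediately. A quick way to confirm the missing context (hypothetical reproduction, not part of the run):

    kubectl config get-contexts              # the no-preload-303000 context will be absent
    out/minikube-darwin-arm64 profile list   # the profile exists on disk but is Stopped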

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-303000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-303000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-303000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.642459ms)

** stderr ** 
	error: context "no-preload-303000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-303000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (30.23675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-303000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (30.84775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
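
The diff above is in go-cmp's (-want +got) form: every expected v1.29.0-rc.2 image sits on the want side and the got side is empty, because `image list` ran against a profile whose VM never booted. Re-running the same command the test uses (sketch only; it returns nothing useful while the host is Stopped):

    out/minikube-darwin-arm64 -p no-preload-303000 image list --format=json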

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-303000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-303000 --alsologtostderr -v=1: exit status 83 (43.683625ms)

-- stdout --
	* The control-plane node no-preload-303000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-303000"

-- /stdout --
** stderr ** 
	I0307 14:51:50.627407    5939 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:50.627567    5939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:50.627570    5939 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:50.627572    5939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:50.627689    5939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:50.627908    5939 out.go:298] Setting JSON to false
	I0307 14:51:50.627917    5939 mustload.go:65] Loading cluster: no-preload-303000
	I0307 14:51:50.628111    5939 config.go:182] Loaded profile config "no-preload-303000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 14:51:50.632907    5939 out.go:177] * The control-plane node no-preload-303000 host is not running: state=Stopped
	I0307 14:51:50.635950    5939 out.go:177]   To start a cluster, run: "minikube start -p no-preload-303000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-303000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (30.213917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (30.686833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
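
pause fails fast here rather than timing out: mustload checks the host state first, finds it Stopped, prints the "host is not running" advice, and exits with status 83. The suggested recovery (sketch below) would still fail in this environment while /var/run/socket_vmnet is unreachable:

    out/minikube-darwin-arm64 start -p no-preload-303000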

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.874862625s)

-- stdout --
	* [default-k8s-diff-port-671000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-671000" primary control-plane node in "default-k8s-diff-port-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:51:51.329161    5974 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:51:51.329293    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:51.329297    5974 out.go:304] Setting ErrFile to fd 2...
	I0307 14:51:51.329299    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:51:51.329415    5974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:51:51.330498    5974 out.go:298] Setting JSON to false
	I0307 14:51:51.346851    5974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4868,"bootTime":1709847043,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:51:51.346951    5974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:51:51.351612    5974 out.go:177] * [default-k8s-diff-port-671000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:51:51.357553    5974 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:51:51.357624    5974 notify.go:220] Checking for updates...
	I0307 14:51:51.361602    5974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:51:51.364560    5974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:51:51.367573    5974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:51:51.370525    5974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:51:51.373500    5974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:51:51.376805    5974 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:51.376863    5974 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:51:51.376907    5974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:51:51.381539    5974 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:51:51.388578    5974 start.go:297] selected driver: qemu2
	I0307 14:51:51.388587    5974 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:51:51.388597    5974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:51:51.390828    5974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 14:51:51.394546    5974 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:51:51.397648    5974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:51:51.397706    5974 cni.go:84] Creating CNI manager for ""
	I0307 14:51:51.397714    5974 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:51:51.397719    5974 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:51:51.397752    5974 start.go:340] cluster config:
	{Name:default-k8s-diff-port-671000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:51:51.402189    5974 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:51:51.409627    5974 out.go:177] * Starting "default-k8s-diff-port-671000" primary control-plane node in "default-k8s-diff-port-671000" cluster
	I0307 14:51:51.413501    5974 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:51:51.413516    5974 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:51:51.413528    5974 cache.go:56] Caching tarball of preloaded images
	I0307 14:51:51.413602    5974 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:51:51.413608    5974 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:51:51.413675    5974 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/default-k8s-diff-port-671000/config.json ...
	I0307 14:51:51.413686    5974 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/default-k8s-diff-port-671000/config.json: {Name:mk006df80ca91ee0f6d74d78ebcf185076d51674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:51:51.414108    5974 start.go:360] acquireMachinesLock for default-k8s-diff-port-671000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:51.414148    5974 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "default-k8s-diff-port-671000"
	I0307 14:51:51.414159    5974 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:51.414188    5974 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:51.422540    5974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:51.440217    5974 start.go:159] libmachine.API.Create for "default-k8s-diff-port-671000" (driver="qemu2")
	I0307 14:51:51.440249    5974 client.go:168] LocalClient.Create starting
	I0307 14:51:51.440320    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:51.440349    5974 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:51.440358    5974 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:51.440407    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:51.440429    5974 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:51.440435    5974 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:51.440919    5974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:51.586607    5974 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:51.649676    5974 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:51.649681    5974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:51.649873    5974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:51.661915    5974 main.go:141] libmachine: STDOUT: 
	I0307 14:51:51.661934    5974 main.go:141] libmachine: STDERR: 
	I0307 14:51:51.661980    5974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2 +20000M
	I0307 14:51:51.672643    5974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:51.672665    5974 main.go:141] libmachine: STDERR: 
	I0307 14:51:51.672679    5974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:51.672683    5974 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:51.672724    5974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:eb:e2:fc:36:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:51.674561    5974 main.go:141] libmachine: STDOUT: 
	I0307 14:51:51.674584    5974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:51.674604    5974 client.go:171] duration metric: took 234.344834ms to LocalClient.Create
	I0307 14:51:53.676871    5974 start.go:128] duration metric: took 2.262609834s to createHost
	I0307 14:51:53.677029    5974 start.go:83] releasing machines lock for "default-k8s-diff-port-671000", held for 2.262830917s
	W0307 14:51:53.677077    5974 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:53.691190    5974 out.go:177] * Deleting "default-k8s-diff-port-671000" in qemu2 ...
	W0307 14:51:53.720993    5974 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:51:53.721030    5974 start.go:728] Will try again in 5 seconds ...
	I0307 14:51:58.723278    5974 start.go:360] acquireMachinesLock for default-k8s-diff-port-671000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:51:58.723659    5974 start.go:364] duration metric: took 292.209µs to acquireMachinesLock for "default-k8s-diff-port-671000"
	I0307 14:51:58.723808    5974 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:51:58.724051    5974 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:51:58.729751    5974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:51:58.777927    5974 start.go:159] libmachine.API.Create for "default-k8s-diff-port-671000" (driver="qemu2")
	I0307 14:51:58.777980    5974 client.go:168] LocalClient.Create starting
	I0307 14:51:58.778076    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:51:58.778128    5974 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:58.778145    5974 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:58.778208    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:51:58.778235    5974 main.go:141] libmachine: Decoding PEM data...
	I0307 14:51:58.778247    5974 main.go:141] libmachine: Parsing certificate...
	I0307 14:51:58.778767    5974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:51:58.997491    5974 main.go:141] libmachine: Creating SSH key...
	I0307 14:51:59.096924    5974 main.go:141] libmachine: Creating Disk image...
	I0307 14:51:59.096930    5974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:51:59.097104    5974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:59.111225    5974 main.go:141] libmachine: STDOUT: 
	I0307 14:51:59.111254    5974 main.go:141] libmachine: STDERR: 
	I0307 14:51:59.111318    5974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2 +20000M
	I0307 14:51:59.122031    5974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:51:59.122052    5974 main.go:141] libmachine: STDERR: 
	I0307 14:51:59.122066    5974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:59.122070    5974 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:51:59.122103    5974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c5:62:d9:04:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:51:59.123730    5974 main.go:141] libmachine: STDOUT: 
	I0307 14:51:59.123750    5974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:51:59.123762    5974 client.go:171] duration metric: took 345.767417ms to LocalClient.Create
	I0307 14:52:01.125967    5974 start.go:128] duration metric: took 2.40185125s to createHost
	I0307 14:52:01.126049    5974 start.go:83] releasing machines lock for "default-k8s-diff-port-671000", held for 2.402327125s
	W0307 14:52:01.126321    5974 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:01.135916    5974 out.go:177] 
	W0307 14:52:01.144003    5974 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:01.144058    5974 out.go:239] * 
	* 
	W0307 14:52:01.146559    5974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:52:01.157285    5974 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (66.747958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)
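
Same root cause as the other FirstStart failures; the only material difference in this variant is --apiserver-port=8444 (visible as APIServerPort:8444 in the config dump above). The refusal can be reproduced without minikube by probing the unix socket directly (hypothetical check; nc ships with macOS):

    nc -U /var/run/socket_vmnet < /dev/null   # should mirror the "Connection refused" above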

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-616000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-616000 create -f testdata/busybox.yaml: exit status 1 (29.399125ms)

** stderr ** 
	error: context "embed-certs-616000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-616000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (31.400958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (30.681917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-616000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system: exit status 1 (27.277375ms)

** stderr ** 
	error: context "embed-certs-616000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (31.408209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
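
Note that `addons enable` itself succeeded: with the cluster down it persists the addon settings to the profile's stored config (the SecondStart log below shows metrics-server:true plus the custom echoserver image and fake.domain registry), so the failure comes from the follow-up kubectl describe against a context that was never created. A hedged check of the stored profile, using the path from the logs:

    grep -i metricsserver /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/embed-certs-616000/config.json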

TestStartStop/group/embed-certs/serial/SecondStart (5.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.699498792s)

-- stdout --
	* [embed-certs-616000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	* Restarting existing qemu2 VM for "embed-certs-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:52:00.547260    6030 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:00.547394    6030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:00.547397    6030 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:00.547399    6030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:00.547513    6030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:00.548520    6030 out.go:298] Setting JSON to false
	I0307 14:52:00.564645    6030 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4877,"bootTime":1709847043,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:52:00.564717    6030 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:52:00.569806    6030 out.go:177] * [embed-certs-616000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:52:00.576682    6030 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:52:00.580768    6030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:52:00.576744    6030 notify.go:220] Checking for updates...
	I0307 14:52:00.586768    6030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:52:00.589750    6030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:52:00.592748    6030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:52:00.595720    6030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:52:00.599016    6030 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:00.599315    6030 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:52:00.603776    6030 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:52:00.610708    6030 start.go:297] selected driver: qemu2
	I0307 14:52:00.610715    6030 start.go:901] validating driver "qemu2" against &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:00.610807    6030 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:52:00.613138    6030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:52:00.613180    6030 cni.go:84] Creating CNI manager for ""
	I0307 14:52:00.613186    6030 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:52:00.613213    6030 start.go:340] cluster config:
	{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:00.617556    6030 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:52:00.622716    6030 out.go:177] * Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	I0307 14:52:00.626733    6030 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:52:00.626749    6030 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:52:00.626765    6030 cache.go:56] Caching tarball of preloaded images
	I0307 14:52:00.626831    6030 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:52:00.626837    6030 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:52:00.626900    6030 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/embed-certs-616000/config.json ...
	I0307 14:52:00.627454    6030 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:01.126175    6030 start.go:364] duration metric: took 498.676125ms to acquireMachinesLock for "embed-certs-616000"
	I0307 14:52:01.126266    6030 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:01.126282    6030 fix.go:54] fixHost starting: 
	I0307 14:52:01.126939    6030 fix.go:112] recreateIfNeeded on embed-certs-616000: state=Stopped err=<nil>
	W0307 14:52:01.127000    6030 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:01.135915    6030 out.go:177] * Restarting existing qemu2 VM for "embed-certs-616000" ...
	I0307 14:52:01.144106    6030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:61:d0:a0:13:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:52:01.154477    6030 main.go:141] libmachine: STDOUT: 
	I0307 14:52:01.154573    6030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:01.154800    6030 fix.go:56] duration metric: took 28.50125ms for fixHost
	I0307 14:52:01.154823    6030 start.go:83] releasing machines lock for "embed-certs-616000", held for 28.600333ms
	W0307 14:52:01.154867    6030 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:01.155044    6030 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:01.155070    6030 start.go:728] Will try again in 5 seconds ...
	I0307 14:52:06.156814    6030 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:06.157168    6030 start.go:364] duration metric: took 273.25µs to acquireMachinesLock for "embed-certs-616000"
	I0307 14:52:06.157246    6030 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:06.157267    6030 fix.go:54] fixHost starting: 
	I0307 14:52:06.157995    6030 fix.go:112] recreateIfNeeded on embed-certs-616000: state=Stopped err=<nil>
	W0307 14:52:06.158023    6030 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:06.163633    6030 out.go:177] * Restarting existing qemu2 VM for "embed-certs-616000" ...
	I0307 14:52:06.170740    6030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:61:d0:a0:13:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/embed-certs-616000/disk.qcow2
	I0307 14:52:06.180457    6030 main.go:141] libmachine: STDOUT: 
	I0307 14:52:06.180527    6030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:06.180623    6030 fix.go:56] duration metric: took 23.352125ms for fixHost
	I0307 14:52:06.180643    6030 start.go:83] releasing machines lock for "embed-certs-616000", held for 23.454416ms
	W0307 14:52:06.180887    6030 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:06.187496    6030 out.go:177] 
	W0307 14:52:06.191563    6030 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:06.191585    6030 out.go:239] * 
	* 
	W0307 14:52:06.194151    6030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:52:06.201540    6030 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (68.821667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.77s)
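
Every restart attempt in the log above fails at the same point: minikube execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the run exits with status 80 after one retry. A minimal host-side triage sketch, assuming a Homebrew-managed socket_vmnet daemon (the service name below is an assumption; both paths come straight from the log):

	# Does the daemon's UNIX socket exist at the path minikube is dialing?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet job loaded in launchd? (label assumed)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon; "socket_vmnet" as a Homebrew service name is an assumption
	sudo brew services restart socket_vmnet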

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-671000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-671000 create -f testdata/busybox.yaml: exit status 1 (30.68825ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-671000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-671000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (31.244708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (30.666834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
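
This failure, like the EnableAddonWhileActive, UserAppExistsAfterStop, and AddonExistsAfterStop failures around it, is downstream of the failed start: the VM was never provisioned, so minikube never wrote a default-k8s-diff-port-671000 context into the kubeconfig, and every kubectl --context invocation aborts before reaching a cluster. A quick confirmation sketch, using the kubeconfig path from the environment dumps earlier in this report:

	# The profile's context should be absent from this list
	kubectl --kubeconfig /Users/jenkins/minikube-integration/18320-1153/kubeconfig config get-contexts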

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-671000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-671000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-671000 describe deploy/metrics-server -n kube-system: exit status 1 (26.643708ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-671000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-671000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (31.490667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
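
Note how the expected string on the assertion line above is composed: the --registries override is prefixed to the --images override, so the test looks for fake.domain/registry.k8s.io/echoserver:1.4 in the deployment description. A purely illustrative shell sketch of that composition:

	# registry override + "/" + image override = the string the test greps for
	registry=fake.domain
	image=registry.k8s.io/echoserver:1.4
	echo "${registry}/${image}"   # -> fake.domain/registry.k8s.io/echoserver:1.4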

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.198268666s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-671000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-671000" primary control-plane node in "default-k8s-diff-port-671000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:52:04.763659    6072 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:04.763779    6072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:04.763782    6072 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:04.763785    6072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:04.763909    6072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:04.764912    6072 out.go:298] Setting JSON to false
	I0307 14:52:04.780894    6072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4881,"bootTime":1709847043,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:52:04.780962    6072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:52:04.785615    6072 out.go:177] * [default-k8s-diff-port-671000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:52:04.791631    6072 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:52:04.795688    6072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:52:04.791683    6072 notify.go:220] Checking for updates...
	I0307 14:52:04.801544    6072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:52:04.804595    6072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:52:04.806191    6072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:52:04.813579    6072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:52:04.816926    6072 config.go:182] Loaded profile config "default-k8s-diff-port-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:04.817166    6072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:52:04.820578    6072 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:52:04.827595    6072 start.go:297] selected driver: qemu2
	I0307 14:52:04.827600    6072 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:04.827646    6072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:52:04.829910    6072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 14:52:04.829956    6072 cni.go:84] Creating CNI manager for ""
	I0307 14:52:04.829963    6072 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:52:04.829985    6072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:04.834274    6072 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:52:04.841555    6072 out.go:177] * Starting "default-k8s-diff-port-671000" primary control-plane node in "default-k8s-diff-port-671000" cluster
	I0307 14:52:04.846431    6072 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 14:52:04.846448    6072 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 14:52:04.846459    6072 cache.go:56] Caching tarball of preloaded images
	I0307 14:52:04.846506    6072 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:52:04.846512    6072 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 14:52:04.846578    6072 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/default-k8s-diff-port-671000/config.json ...
	I0307 14:52:04.847126    6072 start.go:360] acquireMachinesLock for default-k8s-diff-port-671000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:04.847154    6072 start.go:364] duration metric: took 21.5µs to acquireMachinesLock for "default-k8s-diff-port-671000"
	I0307 14:52:04.847162    6072 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:04.847167    6072 fix.go:54] fixHost starting: 
	I0307 14:52:04.847296    6072 fix.go:112] recreateIfNeeded on default-k8s-diff-port-671000: state=Stopped err=<nil>
	W0307 14:52:04.847307    6072 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:04.851546    6072 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-671000" ...
	I0307 14:52:04.859543    6072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c5:62:d9:04:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:52:04.861703    6072 main.go:141] libmachine: STDOUT: 
	I0307 14:52:04.861727    6072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:04.861759    6072 fix.go:56] duration metric: took 14.590208ms for fixHost
	I0307 14:52:04.861765    6072 start.go:83] releasing machines lock for "default-k8s-diff-port-671000", held for 14.606791ms
	W0307 14:52:04.861772    6072 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:04.861805    6072 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:04.861811    6072 start.go:728] Will try again in 5 seconds ...
	I0307 14:52:09.864088    6072 start.go:360] acquireMachinesLock for default-k8s-diff-port-671000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:09.864529    6072 start.go:364] duration metric: took 325.166µs to acquireMachinesLock for "default-k8s-diff-port-671000"
	I0307 14:52:09.864652    6072 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:09.864673    6072 fix.go:54] fixHost starting: 
	I0307 14:52:09.865466    6072 fix.go:112] recreateIfNeeded on default-k8s-diff-port-671000: state=Stopped err=<nil>
	W0307 14:52:09.865493    6072 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:09.881944    6072 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-671000" ...
	I0307 14:52:09.885011    6072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c5:62:d9:04:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/default-k8s-diff-port-671000/disk.qcow2
	I0307 14:52:09.894747    6072 main.go:141] libmachine: STDOUT: 
	I0307 14:52:09.894805    6072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:09.894898    6072 fix.go:56] duration metric: took 30.2265ms for fixHost
	I0307 14:52:09.894919    6072 start.go:83] releasing machines lock for "default-k8s-diff-port-671000", held for 30.364875ms
	W0307 14:52:09.895123    6072 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:09.904870    6072 out.go:177] 
	W0307 14:52:09.907867    6072 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:09.907896    6072 out.go:239] * 
	* 
	W0307 14:52:09.910282    6072 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:52:09.918823    6072 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (68.431875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
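
The stdout above carries minikube's own recovery hint. Sketched out with the exact flags this test used, and noting it can only succeed once the socket_vmnet daemon is reachable again:

	# Recovery path suggested by the error message itself
	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-671000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-671000 --memory=2200 --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.28.4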

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-616000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (33.498292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-616000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.3ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (30.90075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-616000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (31.386083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
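
The want-list in the diff above is the full image set a v1.28.4 control plane should report; the got side is empty because the host is stopped, so image list has no runtime to query. On a healthy profile the same check can be approximated from the shell; the jq filter assumes the JSON output is an array of image objects carrying a repoTags field (schema assumed, not verified here):

	out/minikube-darwin-arm64 -p embed-certs-616000 image list --format=json | jq -r '.[].repoTags[]?' | sort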

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1: exit status 83 (43.239875ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-616000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-616000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:52:06.480825    6091 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:06.480966    6091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:06.480969    6091 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:06.480971    6091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:06.481097    6091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:06.481310    6091 out.go:298] Setting JSON to false
	I0307 14:52:06.481320    6091 mustload.go:65] Loading cluster: embed-certs-616000
	I0307 14:52:06.481534    6091 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:06.484808    6091 out.go:177] * The control-plane node embed-certs-616000 host is not running: state=Stopped
	I0307 14:52:06.488653    6091 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-616000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (31.755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (30.907ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
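
pause exits 83 here because the control-plane host is Stopped and pausing is a guest-side operation; the harness's status probe exits 7 for the same reason. A guard sketch that only attempts the pause when the host reports Running, reusing the status invocation the harness itself runs:

	if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-616000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p embed-certs-616000
	fi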

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (10.035099541s)

                                                
                                                
-- stdout --
	* [newest-cni-614000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-614000" primary control-plane node in "newest-cni-614000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-614000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 14:52:06.956488    6114 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:06.956641    6114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:06.956644    6114 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:06.956647    6114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:06.956769    6114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:06.957869    6114 out.go:298] Setting JSON to false
	I0307 14:52:06.973837    6114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4883,"bootTime":1709847043,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:52:06.973899    6114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:52:06.979528    6114 out.go:177] * [newest-cni-614000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:52:06.987379    6114 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:52:06.987428    6114 notify.go:220] Checking for updates...
	I0307 14:52:06.991435    6114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:52:06.994507    6114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:52:06.997385    6114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:52:07.000412    6114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:52:07.003420    6114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:52:07.006789    6114 config.go:182] Loaded profile config "default-k8s-diff-port-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:07.006854    6114 config.go:182] Loaded profile config "multinode-172000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:07.006913    6114 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:52:07.011429    6114 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 14:52:07.018373    6114 start.go:297] selected driver: qemu2
	I0307 14:52:07.018379    6114 start.go:901] validating driver "qemu2" against <nil>
	I0307 14:52:07.018385    6114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:52:07.020681    6114 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0307 14:52:07.020713    6114 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0307 14:52:07.029386    6114 out.go:177] * Automatically selected the socket_vmnet network
	I0307 14:52:07.032514    6114 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 14:52:07.032555    6114 cni.go:84] Creating CNI manager for ""
	I0307 14:52:07.032562    6114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:52:07.032575    6114 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 14:52:07.032622    6114 start.go:340] cluster config:
	{Name:newest-cni-614000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:07.037333    6114 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:52:07.044361    6114 out.go:177] * Starting "newest-cni-614000" primary control-plane node in "newest-cni-614000" cluster
	I0307 14:52:07.048421    6114 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 14:52:07.048439    6114 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 14:52:07.048452    6114 cache.go:56] Caching tarball of preloaded images
	I0307 14:52:07.048527    6114 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:52:07.048534    6114 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 14:52:07.048604    6114 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/newest-cni-614000/config.json ...
	I0307 14:52:07.048619    6114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/newest-cni-614000/config.json: {Name:mkf335b1566c52647b6cae7f4067fecdd90d9e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 14:52:07.048859    6114 start.go:360] acquireMachinesLock for newest-cni-614000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:07.048893    6114 start.go:364] duration metric: took 28µs to acquireMachinesLock for "newest-cni-614000"
	I0307 14:52:07.048905    6114 start.go:93] Provisioning new machine with config: &{Name:newest-cni-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:52:07.048932    6114 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:52:07.057397    6114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:52:07.076113    6114 start.go:159] libmachine.API.Create for "newest-cni-614000" (driver="qemu2")
	I0307 14:52:07.076144    6114 client.go:168] LocalClient.Create starting
	I0307 14:52:07.076248    6114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:52:07.076307    6114 main.go:141] libmachine: Decoding PEM data...
	I0307 14:52:07.076316    6114 main.go:141] libmachine: Parsing certificate...
	I0307 14:52:07.076367    6114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:52:07.076393    6114 main.go:141] libmachine: Decoding PEM data...
	I0307 14:52:07.076402    6114 main.go:141] libmachine: Parsing certificate...
	I0307 14:52:07.076909    6114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:52:07.220595    6114 main.go:141] libmachine: Creating SSH key...
	I0307 14:52:07.444992    6114 main.go:141] libmachine: Creating Disk image...
	I0307 14:52:07.445002    6114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:52:07.445251    6114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:07.458127    6114 main.go:141] libmachine: STDOUT: 
	I0307 14:52:07.458155    6114 main.go:141] libmachine: STDERR: 
	I0307 14:52:07.458204    6114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2 +20000M
	I0307 14:52:07.469074    6114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:52:07.469087    6114 main.go:141] libmachine: STDERR: 
	I0307 14:52:07.469099    6114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:07.469104    6114 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:52:07.469135    6114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b2:36:79:23:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:07.470835    6114 main.go:141] libmachine: STDOUT: 
	I0307 14:52:07.470851    6114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:07.470872    6114 client.go:171] duration metric: took 394.716792ms to LocalClient.Create
	I0307 14:52:09.473100    6114 start.go:128] duration metric: took 2.42410825s to createHost
	I0307 14:52:09.473160    6114 start.go:83] releasing machines lock for "newest-cni-614000", held for 2.424226666s
	W0307 14:52:09.473221    6114 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:09.479532    6114 out.go:177] * Deleting "newest-cni-614000" in qemu2 ...
	W0307 14:52:09.515014    6114 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:09.515058    6114 start.go:728] Will try again in 5 seconds ...
	I0307 14:52:14.517336    6114 start.go:360] acquireMachinesLock for newest-cni-614000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:14.517783    6114 start.go:364] duration metric: took 337.333µs to acquireMachinesLock for "newest-cni-614000"
	I0307 14:52:14.517930    6114 start.go:93] Provisioning new machine with config: &{Name:newest-cni-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 14:52:14.518258    6114 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 14:52:14.526881    6114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 14:52:14.576187    6114 start.go:159] libmachine.API.Create for "newest-cni-614000" (driver="qemu2")
	I0307 14:52:14.576246    6114 client.go:168] LocalClient.Create starting
	I0307 14:52:14.576360    6114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/ca.pem
	I0307 14:52:14.576420    6114 main.go:141] libmachine: Decoding PEM data...
	I0307 14:52:14.576436    6114 main.go:141] libmachine: Parsing certificate...
	I0307 14:52:14.576506    6114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18320-1153/.minikube/certs/cert.pem
	I0307 14:52:14.576545    6114 main.go:141] libmachine: Decoding PEM data...
	I0307 14:52:14.576560    6114 main.go:141] libmachine: Parsing certificate...
	I0307 14:52:14.577378    6114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 14:52:14.735262    6114 main.go:141] libmachine: Creating SSH key...
	I0307 14:52:14.888395    6114 main.go:141] libmachine: Creating Disk image...
	I0307 14:52:14.888401    6114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 14:52:14.888617    6114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2.raw /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:14.901607    6114 main.go:141] libmachine: STDOUT: 
	I0307 14:52:14.901632    6114 main.go:141] libmachine: STDERR: 
	I0307 14:52:14.901686    6114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2 +20000M
	I0307 14:52:14.912279    6114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 14:52:14.912298    6114 main.go:141] libmachine: STDERR: 
	I0307 14:52:14.912310    6114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:14.912315    6114 main.go:141] libmachine: Starting QEMU VM...
	I0307 14:52:14.912349    6114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:72:eb:a4:2a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:14.914047    6114 main.go:141] libmachine: STDOUT: 
	I0307 14:52:14.914065    6114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:14.914077    6114 client.go:171] duration metric: took 337.820792ms to LocalClient.Create
	I0307 14:52:16.916328    6114 start.go:128] duration metric: took 2.398006917s to createHost
	I0307 14:52:16.916402    6114 start.go:83] releasing machines lock for "newest-cni-614000", held for 2.398542875s
	W0307 14:52:16.916784    6114 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-614000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-614000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:16.926521    6114 out.go:177] 
	W0307 14:52:16.934622    6114 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:16.934708    6114 out.go:239] * 
	* 
	W0307 14:52:16.937344    6114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:52:16.947492    6114 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (71.074083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-614000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.11s)
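
Note: every failed start in this run dies at the same step. The driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... (the executed command above), and the client cannot reach the socket_vmnet daemon's unix socket ("Connection refused"), so no qemu2 VM ever boots and every dependent test later sees a stopped host. A minimal Go sketch of the same unix-socket probe, assuming only that the SocketVMnetPath from the cluster config above is correct for this agent:

	package main

	// Probe the unix socket that socket_vmnet_client failed to reach in the
	// log above. An error here reproduces the same "Connection refused"; a
	// successful dial means the socket_vmnet daemon is up and listening.
	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}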

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-671000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (33.273292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-671000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-671000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-671000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.071916ms)

** stderr ** 
	error: context "default-k8s-diff-port-671000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-671000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (31.152334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-671000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (31.213125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-671000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-671000 --alsologtostderr -v=1: exit status 83 (42.396041ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-671000"

-- /stdout --
** stderr ** 
	I0307 14:52:10.197156    6140 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:10.197291    6140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:10.197294    6140 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:10.197296    6140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:10.197429    6140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:10.197623    6140 out.go:298] Setting JSON to false
	I0307 14:52:10.197633    6140 mustload.go:65] Loading cluster: default-k8s-diff-port-671000
	I0307 14:52:10.197797    6140 config.go:182] Loaded profile config "default-k8s-diff-port-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 14:52:10.201823    6140 out.go:177] * The control-plane node default-k8s-diff-port-671000 host is not running: state=Stopped
	I0307 14:52:10.205848    6140 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-671000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-671000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (31.019292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (30.819625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.186306959s)

-- stdout --
	* [newest-cni-614000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-614000" primary control-plane node in "newest-cni-614000" cluster
	* Restarting existing qemu2 VM for "newest-cni-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 14:52:20.945502    6199 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:20.945625    6199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:20.945629    6199 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:20.945631    6199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:20.945751    6199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:20.946769    6199 out.go:298] Setting JSON to false
	I0307 14:52:20.962870    6199 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4897,"bootTime":1709847043,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 14:52:20.962933    6199 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 14:52:20.966881    6199 out.go:177] * [newest-cni-614000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 14:52:20.973770    6199 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 14:52:20.973846    6199 notify.go:220] Checking for updates...
	I0307 14:52:20.977809    6199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 14:52:20.980786    6199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 14:52:20.983789    6199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 14:52:20.986789    6199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 14:52:20.989765    6199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 14:52:20.993025    6199 config.go:182] Loaded profile config "newest-cni-614000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 14:52:20.993292    6199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 14:52:20.997762    6199 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 14:52:21.004745    6199 start.go:297] selected driver: qemu2
	I0307 14:52:21.004752    6199 start.go:901] validating driver "qemu2" against &{Name:newest-cni-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:21.004819    6199 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 14:52:21.007061    6199 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 14:52:21.007103    6199 cni.go:84] Creating CNI manager for ""
	I0307 14:52:21.007110    6199 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 14:52:21.007133    6199 start.go:340] cluster config:
	{Name:newest-cni-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 14:52:21.011405    6199 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 14:52:21.018619    6199 out.go:177] * Starting "newest-cni-614000" primary control-plane node in "newest-cni-614000" cluster
	I0307 14:52:21.022774    6199 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 14:52:21.022787    6199 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 14:52:21.022795    6199 cache.go:56] Caching tarball of preloaded images
	I0307 14:52:21.022845    6199 preload.go:173] Found /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 14:52:21.022854    6199 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 14:52:21.022917    6199 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/newest-cni-614000/config.json ...
	I0307 14:52:21.023468    6199 start.go:360] acquireMachinesLock for newest-cni-614000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:21.023498    6199 start.go:364] duration metric: took 21.084µs to acquireMachinesLock for "newest-cni-614000"
	I0307 14:52:21.023509    6199 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:21.023514    6199 fix.go:54] fixHost starting: 
	I0307 14:52:21.023627    6199 fix.go:112] recreateIfNeeded on newest-cni-614000: state=Stopped err=<nil>
	W0307 14:52:21.023637    6199 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:21.027789    6199 out.go:177] * Restarting existing qemu2 VM for "newest-cni-614000" ...
	I0307 14:52:21.035815    6199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:72:eb:a4:2a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:21.037887    6199 main.go:141] libmachine: STDOUT: 
	I0307 14:52:21.037910    6199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:21.037940    6199 fix.go:56] duration metric: took 14.424334ms for fixHost
	I0307 14:52:21.037946    6199 start.go:83] releasing machines lock for "newest-cni-614000", held for 14.443625ms
	W0307 14:52:21.037951    6199 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:21.037993    6199 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:21.037998    6199 start.go:728] Will try again in 5 seconds ...
	I0307 14:52:26.040313    6199 start.go:360] acquireMachinesLock for newest-cni-614000: {Name:mk6de7c633b714084d6e2f7358f5de3d48a293fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 14:52:26.040770    6199 start.go:364] duration metric: took 311.875µs to acquireMachinesLock for "newest-cni-614000"
	I0307 14:52:26.040912    6199 start.go:96] Skipping create...Using existing machine configuration
	I0307 14:52:26.040934    6199 fix.go:54] fixHost starting: 
	I0307 14:52:26.041646    6199 fix.go:112] recreateIfNeeded on newest-cni-614000: state=Stopped err=<nil>
	W0307 14:52:26.041678    6199 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 14:52:26.051050    6199 out.go:177] * Restarting existing qemu2 VM for "newest-cni-614000" ...
	I0307 14:52:26.055197    6199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:72:eb:a4:2a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18320-1153/.minikube/machines/newest-cni-614000/disk.qcow2
	I0307 14:52:26.065262    6199 main.go:141] libmachine: STDOUT: 
	I0307 14:52:26.065339    6199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 14:52:26.065435    6199 fix.go:56] duration metric: took 24.499625ms for fixHost
	I0307 14:52:26.065461    6199 start.go:83] releasing machines lock for "newest-cni-614000", held for 24.662625ms
	W0307 14:52:26.065669    6199 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-614000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-614000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 14:52:26.072916    6199 out.go:177] 
	W0307 14:52:26.077093    6199 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 14:52:26.077125    6199 out.go:239] * 
	* 
	W0307 14:52:26.079479    6199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 14:52:26.087097    6199 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-614000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (73.640666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-614000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
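
Note: the SecondStart log above has the same retry shape as FirstStart: the host start fails, minikube prints "! StartHost failed, but will try again", waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION and exit status 80 (the status the test asserts on). A rough Go sketch of that try/wait/retry/give-up flow as it reads in the log; startHost and the hard-coded exit code are illustrative stand-ins, not minikube's actual internals:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the driver start that fails in this run; it
	// always returns the socket_vmnet connection error seen in the log.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test sees
			}
		}
	}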

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-614000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (31.492209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-614000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
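
Note: the "(-want +got)" block above is a one-sided diff: every expected v1.29.0-rc.2 image sits on the -want side and nothing appears on the +got side, because "image list" against the stopped VM returns no images at all. The notation matches the output of the github.com/google/go-cmp package, which this report's diff format follows; a minimal reproduction under that assumption:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: the stopped VM reports no images
		// Each missing entry is printed with a leading "-", as in the report.
		fmt.Println(cmp.Diff(want, got))
	}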

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-614000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-614000 --alsologtostderr -v=1: exit status 83 (43.997417ms)

-- stdout --
	* The control-plane node newest-cni-614000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-614000"

-- /stdout --
** stderr ** 
	I0307 14:52:26.281473    6217 out.go:291] Setting OutFile to fd 1 ...
	I0307 14:52:26.281638    6217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:26.281641    6217 out.go:304] Setting ErrFile to fd 2...
	I0307 14:52:26.281643    6217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 14:52:26.281775    6217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 14:52:26.281996    6217 out.go:298] Setting JSON to false
	I0307 14:52:26.282005    6217 mustload.go:65] Loading cluster: newest-cni-614000
	I0307 14:52:26.282201    6217 config.go:182] Loaded profile config "newest-cni-614000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 14:52:26.286262    6217 out.go:177] * The control-plane node newest-cni-614000 host is not running: state=Stopped
	I0307 14:52:26.290233    6217 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-614000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-614000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (31.96725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-614000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (32.357916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-614000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (159/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 51.12
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.29.0-rc.2/json-events 49.1
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 202.22
38 TestAddons/parallel/Registry 17.98
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 44.43
45 TestAddons/parallel/Headlamp 11.57
46 TestAddons/parallel/CloudSpanner 5.17
47 TestAddons/parallel/LocalPath 40.82
48 TestAddons/parallel/NvidiaDevicePlugin 5.16
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.07
53 TestAddons/StoppedEnableDisable 12.4
61 TestHyperKitDriverInstallOrUpdate 9.15
64 TestErrorSpam/setup 31.99
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.24
67 TestErrorSpam/pause 0.63
68 TestErrorSpam/unpause 0.57
69 TestErrorSpam/stop 55.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 48.54
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 33.38
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.04
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.07
81 TestFunctional/serial/CacheCmd/cache/add_local 1.22
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
86 TestFunctional/serial/CacheCmd/cache/delete 0.08
87 TestFunctional/serial/MinikubeKubectlCmd 0.53
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
89 TestFunctional/serial/ExtraConfig 34.68
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.66
92 TestFunctional/serial/LogsFileCmd 0.62
93 TestFunctional/serial/InvalidService 4.25
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 9.16
97 TestFunctional/parallel/DryRun 0.25
98 TestFunctional/parallel/InternationalLanguage 0.12
99 TestFunctional/parallel/StatusCmd 0.25
104 TestFunctional/parallel/AddonsCmd 0.13
105 TestFunctional/parallel/PersistentVolumeClaim 25.79
107 TestFunctional/parallel/SSHCmd 0.15
108 TestFunctional/parallel/CpCmd 0.48
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.47
115 TestFunctional/parallel/NodeLabels 0.04
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
119 TestFunctional/parallel/License 1.65
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
132 TestFunctional/parallel/ServiceCmd/List 0.29
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
135 TestFunctional/parallel/ServiceCmd/Format 0.1
136 TestFunctional/parallel/ServiceCmd/URL 0.11
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
138 TestFunctional/parallel/ProfileCmd/profile_list 0.15
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
140 TestFunctional/parallel/MountCmd/any-port 9.08
141 TestFunctional/parallel/MountCmd/specific-port 0.83
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
143 TestFunctional/parallel/Version/short 0.04
144 TestFunctional/parallel/Version/components 0.19
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
149 TestFunctional/parallel/ImageCommands/ImageBuild 6.05
150 TestFunctional/parallel/ImageCommands/Setup 35.74
151 TestFunctional/parallel/DockerEnv/bash 0.37
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.12
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.57
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
162 TestFunctional/delete_addon-resizer_images 0.11
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestMutliControlPlane/serial/StartCluster 438.37
169 TestMutliControlPlane/serial/DeployApp 8.41
170 TestMutliControlPlane/serial/PingHostFromPods 0.8
171 TestMutliControlPlane/serial/AddWorkerNode 81.7
172 TestMutliControlPlane/serial/NodeLabels 0.13
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 2.72
174 TestMutliControlPlane/serial/CopyFile 4.57
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 152.08
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 2.91
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.33
213 TestMainNoArgs 0.04
260 TestStoppedBinaryUpgrade/Setup 5
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
277 TestNoKubernetes/serial/ProfileList 31.43
278 TestNoKubernetes/serial/Stop 3.56
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
297 TestStartStop/group/old-k8s-version/serial/Stop 2.02
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
302 TestStartStop/group/no-preload/serial/Stop 3.83
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
319 TestStartStop/group/embed-certs/serial/Stop 3.31
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.15
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
339 TestStartStop/group/newest-cni/serial/Stop 3.69
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-510000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-510000: exit status 85 (99.810667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:46 PST |          |
	|         | -p download-only-510000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 13:46:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 13:46:27.547614    1598 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:46:27.547748    1598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:46:27.547751    1598 out.go:304] Setting ErrFile to fd 2...
	I0307 13:46:27.547753    1598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:46:27.547891    1598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	W0307 13:46:27.547968    1598 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18320-1153/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18320-1153/.minikube/config/config.json: no such file or directory
	I0307 13:46:27.549218    1598 out.go:298] Setting JSON to true
	I0307 13:46:27.566597    1598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":944,"bootTime":1709847043,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:46:27.566659    1598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:46:27.577159    1598 out.go:97] [download-only-510000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:46:27.581097    1598 out.go:169] MINIKUBE_LOCATION=18320
	I0307 13:46:27.577323    1598 notify.go:220] Checking for updates...
	W0307 13:46:27.577370    1598 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 13:46:27.598234    1598 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:46:27.601210    1598 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:46:27.604153    1598 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:46:27.608213    1598 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	W0307 13:46:27.614162    1598 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 13:46:27.614394    1598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:46:27.619207    1598 out.go:97] Using the qemu2 driver based on user configuration
	I0307 13:46:27.619228    1598 start.go:297] selected driver: qemu2
	I0307 13:46:27.619244    1598 start.go:901] validating driver "qemu2" against <nil>
	I0307 13:46:27.619322    1598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 13:46:27.621294    1598 out.go:169] Automatically selected the socket_vmnet network
	I0307 13:46:27.626906    1598 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 13:46:27.627022    1598 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 13:46:27.627117    1598 cni.go:84] Creating CNI manager for ""
	I0307 13:46:27.627135    1598 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 13:46:27.627187    1598 start.go:340] cluster config:
	{Name:download-only-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:46:27.633932    1598 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 13:46:27.637272    1598 out.go:97] Downloading VM boot image ...
	I0307 13:46:27.637294    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 13:46:46.368678    1598 out.go:97] Starting "download-only-510000" primary control-plane node in "download-only-510000" cluster
	I0307 13:46:46.368728    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:46:46.656445    1598 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 13:46:46.656524    1598 cache.go:56] Caching tarball of preloaded images
	I0307 13:46:46.657246    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:46:46.662023    1598 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 13:46:46.662051    1598 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:46:47.271686    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 13:47:07.206577    1598 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:07.206760    1598 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:07.904369    1598 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 13:47:07.904551    1598 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-510000/config.json ...
	I0307 13:47:07.904567    1598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-510000/config.json: {Name:mk7fd9b47ffcd4f27544635ead98cbb488fc03e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:47:07.904805    1598 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 13:47:07.905002    1598 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 13:47:08.608084    1598 out.go:169] 
	W0307 13:47:08.614137    1598 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0 0x108e570a0] Decompressors:map[bz2:0x14000898df0 gz:0x14000898df8 tar:0x14000898da0 tar.bz2:0x14000898db0 tar.gz:0x14000898dc0 tar.xz:0x14000898dd0 tar.zst:0x14000898de0 tbz2:0x14000898db0 tgz:0x14000898dc0 txz:0x14000898dd0 tzst:0x14000898de0 xz:0x14000898e00 zip:0x14000898e10 zst:0x14000898e08] Getters:map[file:0x140020d2570 http:0x140005fe230 https:0x140005fe280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 13:47:08.614161    1598 out_reason.go:110] 
	W0307 13:47:08.622061    1598 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 13:47:08.626108    1598 out.go:169] 
	
	
	* The control-plane node download-only-510000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-510000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
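For context on the non-zero exit above: the Last Start log ends with "Failed to cache kubectl ... bad response code: 404" because upstream evidently never published darwin/arm64 kubectl binaries (or their .sha256 checksums) for v1.20.0, which predates the Apple-silicon release artifacts; the subtest still passes because "minikube logs" exiting 85 on a download-only profile with no host is the expected outcome. A minimal sketch, not part of the suite, that re-checks the checksum URL taken verbatim from the log:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Expect HTTP 404, confirming the cache failure is upstream
        // availability rather than anything in the test runner.
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request error:", err)
            return
        }
        resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }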

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-510000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (51.12s)
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (51.122407416s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (51.12s)

TestDownloadOnly/v1.28.4/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-973000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-973000: exit status 85 (80.61025ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:46 PST |                     |
	|         | -p download-only-510000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 13:47 PST | 07 Mar 24 13:47 PST |
	| delete  | -p download-only-510000        | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:47 PST | 07 Mar 24 13:47 PST |
	| start   | -o=json --download-only        | download-only-973000 | jenkins | v1.32.0 | 07 Mar 24 13:47 PST |                     |
	|         | -p download-only-973000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 13:47:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 13:47:09.299276    1639 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:47:09.299413    1639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:47:09.299417    1639 out.go:304] Setting ErrFile to fd 2...
	I0307 13:47:09.299419    1639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:47:09.299541    1639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:47:09.300588    1639 out.go:298] Setting JSON to true
	I0307 13:47:09.316866    1639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":986,"bootTime":1709847043,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:47:09.316925    1639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:47:09.322076    1639 out.go:97] [download-only-973000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:47:09.326045    1639 out.go:169] MINIKUBE_LOCATION=18320
	I0307 13:47:09.322157    1639 notify.go:220] Checking for updates...
	I0307 13:47:09.332005    1639 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:47:09.335069    1639 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:47:09.338113    1639 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:47:09.339595    1639 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	W0307 13:47:09.346065    1639 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 13:47:09.346269    1639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:47:09.349019    1639 out.go:97] Using the qemu2 driver based on user configuration
	I0307 13:47:09.349027    1639 start.go:297] selected driver: qemu2
	I0307 13:47:09.349030    1639 start.go:901] validating driver "qemu2" against <nil>
	I0307 13:47:09.349070    1639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 13:47:09.351985    1639 out.go:169] Automatically selected the socket_vmnet network
	I0307 13:47:09.357044    1639 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 13:47:09.357153    1639 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 13:47:09.357196    1639 cni.go:84] Creating CNI manager for ""
	I0307 13:47:09.357202    1639 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:47:09.357213    1639 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 13:47:09.357247    1639 start.go:340] cluster config:
	{Name:download-only-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:47:09.361393    1639 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 13:47:09.364082    1639 out.go:97] Starting "download-only-973000" primary control-plane node in "download-only-973000" cluster
	I0307 13:47:09.364093    1639 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 13:47:10.038902    1639 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 13:47:10.038993    1639 cache.go:56] Caching tarball of preloaded images
	I0307 13:47:10.039725    1639 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 13:47:10.045266    1639 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 13:47:10.045288    1639 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:10.637372    1639 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 13:47:26.991111    1639 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:26.991295    1639 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:47:27.572758    1639 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 13:47:27.572973    1639 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-973000/config.json ...
	I0307 13:47:27.572989    1639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-973000/config.json: {Name:mkffc19d2d190fc137a9d49749cbc9ba3e9512c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:47:27.573231    1639 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 13:47:27.573353    1639 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-973000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-973000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-973000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.29.0-rc.2/json-events (49.1s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-304000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-304000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (49.104519875s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (49.10s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-304000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-304000: exit status 85 (80.160375ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:46 PST |                     |
	|         | -p download-only-510000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 13:47 PST | 07 Mar 24 13:47 PST |
	| delete  | -p download-only-510000           | download-only-510000 | jenkins | v1.32.0 | 07 Mar 24 13:47 PST | 07 Mar 24 13:47 PST |
	| start   | -o=json --download-only           | download-only-973000 | jenkins | v1.32.0 | 07 Mar 24 13:47 PST |                     |
	|         | -p download-only-973000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| delete  | -p download-only-973000           | download-only-973000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST | 07 Mar 24 13:48 PST |
	| start   | -o=json --download-only           | download-only-304000 | jenkins | v1.32.0 | 07 Mar 24 13:48 PST |                     |
	|         | -p download-only-304000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 13:48:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 13:48:00.957708    1676 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:48:00.957816    1676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:48:00.957819    1676 out.go:304] Setting ErrFile to fd 2...
	I0307 13:48:00.957821    1676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:48:00.957931    1676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:48:00.958984    1676 out.go:298] Setting JSON to true
	I0307 13:48:00.975247    1676 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1037,"bootTime":1709847043,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:48:00.975309    1676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:48:00.979525    1676 out.go:97] [download-only-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:48:00.983278    1676 out.go:169] MINIKUBE_LOCATION=18320
	I0307 13:48:00.979646    1676 notify.go:220] Checking for updates...
	I0307 13:48:00.991436    1676 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:48:00.994433    1676 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:48:00.997422    1676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:48:01.000461    1676 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	W0307 13:48:01.005430    1676 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 13:48:01.005629    1676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:48:01.008408    1676 out.go:97] Using the qemu2 driver based on user configuration
	I0307 13:48:01.008417    1676 start.go:297] selected driver: qemu2
	I0307 13:48:01.008422    1676 start.go:901] validating driver "qemu2" against <nil>
	I0307 13:48:01.008470    1676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 13:48:01.011440    1676 out.go:169] Automatically selected the socket_vmnet network
	I0307 13:48:01.016455    1676 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 13:48:01.016546    1676 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 13:48:01.016584    1676 cni.go:84] Creating CNI manager for ""
	I0307 13:48:01.016593    1676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 13:48:01.016603    1676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 13:48:01.016638    1676 start.go:340] cluster config:
	{Name:download-only-304000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:48:01.020742    1676 iso.go:125] acquiring lock: {Name:mk9f11fe8095f5aa8805d47835226b5c64c27e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 13:48:01.023424    1676 out.go:97] Starting "download-only-304000" primary control-plane node in "download-only-304000" cluster
	I0307 13:48:01.023435    1676 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 13:48:02.174542    1676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 13:48:02.174620    1676 cache.go:56] Caching tarball of preloaded images
	I0307 13:48:02.175509    1676 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 13:48:02.181239    1676 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 13:48:02.181271    1676 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:48:02.779667    1676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 13:48:19.634031    1676 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:48:19.634199    1676 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 13:48:20.188676    1676 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 13:48:20.188858    1676 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-304000/config.json ...
	I0307 13:48:20.188876    1676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/download-only-304000/config.json: {Name:mk116fdae28075a2b6566ea82a20cd8883b332e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 13:48:20.189131    1676 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 13:48:20.189256    1676 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18320-1153/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-304000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-304000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.35s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-773000 --alsologtostderr --binary-mirror http://127.0.0.1:49328 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-773000
--- PASS: TestBinaryMirror (0.35s)
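TestBinaryMirror passes quickly because minikube only needs the mirror to answer the release paths it would otherwise fetch from dl.k8s.io, and the test points --binary-mirror at a short-lived local HTTP server on 127.0.0.1:49328. A rough stand-in for such a mirror (the directory name and URL layout here are illustrative assumptions, not the test's actual wiring):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local cache of release binaries so that, for example,
        //   http://127.0.0.1:49328/v1.28.4/bin/darwin/arm64/kubectl
        // maps to ./mirror/v1.28.4/bin/darwin/arm64/kubectl on disk.
        fs := http.FileServer(http.Dir("./mirror"))
        log.Fatal(http.ListenAndServe("127.0.0.1:49328", fs))
    }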

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-263000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-263000: exit status 85 (61.412041ms)

-- stdout --
	* Profile "addons-263000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-263000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-263000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-263000: exit status 85 (57.436209ms)

-- stdout --
	* Profile "addons-263000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-263000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (202.22s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-263000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-263000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m22.22458025s)
--- PASS: TestAddons/Setup (202.22s)

TestAddons/parallel/Registry (17.98s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 7.496583ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zp2kw" [23c4da1e-1600-4bf6-bd46-b55a29d69d26] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003840875s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rcw2g" [631fd6f8-a701-47ff-ba2e-767f14a10490] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0079395s
addons_test.go:340: (dbg) Run:  kubectl --context addons-263000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-263000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-263000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.617202459s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 ip
2024/03/07 13:52:31 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.98s)

TestAddons/parallel/InspektorGadget (10.22s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gwzhw" [657c5a25-4ff2-48fa-804f-8657cd666ff1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004173875s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-263000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-263000: (5.217808042s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.222292ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-bq5l7" [3cbd03d0-e96e-4a05-b38e-f0a9aa3b5fb7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004012875s
addons_test.go:415: (dbg) Run:  kubectl --context addons-263000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)
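The pass condition here is simply that kubectl top returns rows once the k8s-app=metrics-server pod is healthy; a rough by-hand equivalent against this profile:

    # succeeds only once metrics-server is serving the metrics API
    kubectl --context addons-263000 top pods -n kube-system
    out/minikube-darwin-arm64 -p addons-263000 addons disable metrics-server --alsologtostderr -v=1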

TestAddons/parallel/CSI (44.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.832333ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4ab958bd-375f-438f-8280-6a319296101a] Pending
helpers_test.go:344: "task-pv-pod" [4ab958bd-375f-438f-8280-6a319296101a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4ab958bd-375f-438f-8280-6a319296101a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003800625s
addons_test.go:584: (dbg) Run:  kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-263000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-263000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-263000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-263000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3bb980c0-19ae-4cac-b6c2-70f11d53f1b6] Pending
helpers_test.go:344: "task-pv-pod-restore" [3bb980c0-19ae-4cac-b6c2-70f11d53f1b6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3bb980c0-19ae-4cac-b6c2-70f11d53f1b6] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0039045s
addons_test.go:626: (dbg) Run:  kubectl --context addons-263000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-263000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-263000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-263000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.081145875s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.43s)
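The CSI flow above is a full provision, snapshot, and restore round trip. Condensed to its kubectl steps (all taken from the log; the YAML files live under the test's testdata/csi-hostpath-driver directory):

    kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC "hpvc" binds
    kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod writes to the volume
    kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-263000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default   # poll until "true"
    kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore" backed by the snapshot
    kubectl --context addons-263000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod reads the restored data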

TestAddons/parallel/Headlamp (11.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-263000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-v68g9" [08f99681-a093-40d3-b068-ef50b7f87c35] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-v68g9" [08f99681-a093-40d3-b068-ef50b7f87c35] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003804625s
--- PASS: TestAddons/parallel/Headlamp (11.57s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-rbsmt" [e689ddb2-bc4c-4ad1-8cae-c2d3669511b7] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004097458s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-263000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (40.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-263000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-263000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bcd97aae-fdd6-4ae8-be8a-96db11f09648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bcd97aae-fdd6-4ae8-be8a-96db11f09648] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bcd97aae-fdd6-4ae8-be8a-96db11f09648] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004177667s
addons_test.go:891: (dbg) Run:  kubectl --context addons-263000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 ssh "cat /opt/local-path-provisioner/pvc-8066f6fb-2a0d-414c-ad66-0e0e6c420276_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-263000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-263000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-263000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-263000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.355061541s)
--- PASS: TestAddons/parallel/LocalPath (40.82s)
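The local-path check writes through a PVC and then reads the file straight off the node's provisioner directory. A sketch (the pvc-… directory name is run-specific; this run's was pvc-8066f6fb-2a0d-414c-ad66-0e0e6c420276_default_test-pvc, as shown in the log):

    kubectl --context addons-263000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-263000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the test-local-path pod has completed, read the provisioned file from inside the VM
    out/minikube-darwin-arm64 -p addons-263000 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"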

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lkgxm" [f2dc7d70-d24c-4ff8-b079-b1dd168beae6] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004102541s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-263000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-f6gpj" [8e6242fd-50cc-4301-abd1-72320b9f23ef] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00393225s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-263000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-263000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)
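The whole assertion here is that the gcp-auth addon propagates its credentials secret into any newly created namespace, so the check is two commands:

    kubectl --context addons-263000 create ns new-namespace
    kubectl --context addons-263000 get secret gcp-auth -n new-namespace   # must exist without any manual copy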

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-263000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-263000: (12.202986792s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-263000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-263000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-263000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)
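This test confirms that addon toggling still works against a stopped profile (the enable/disable state is recorded in the profile config rather than applied to a live cluster):

    out/minikube-darwin-arm64 stop -p addons-263000
    out/minikube-darwin-arm64 addons enable dashboard -p addons-263000
    out/minikube-darwin-arm64 addons disable dashboard -p addons-263000
    out/minikube-darwin-arm64 addons disable gvisor -p addons-263000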

TestHyperKitDriverInstallOrUpdate (9.15s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.15s)

TestErrorSpam/setup (31.99s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-472000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-472000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 --driver=qemu2 : (31.986831583s)
--- PASS: TestErrorSpam/setup (31.99s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (3.192650042s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (26.0384505s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (26.035121791s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18320-1153/.minikube/files/etc/test/nested/copy/1596/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-836000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.541888459s)
--- PASS: TestFunctional/serial/StartWithProxy (48.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.38s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --alsologtostderr -v=8
E0307 13:57:13.877775    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:13.884773    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:13.896823    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:13.918859    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:13.960929    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:14.042999    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:14.205039    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:14.527098    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:15.169217    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:16.451298    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 13:57:19.013378    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-836000 --alsologtostderr -v=8: (33.378011833s)
functional_test.go:659: soft start took 33.378429s for "functional-836000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.38s)
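On the E0307 cert_rotation lines interleaved above: they reference the client.crt of the addons-263000 profile, which was deleted when TestAddons finished; most likely the test process's client-go certificate reloader is still watching the old path, so these errors are log noise rather than a failure of this test. The soft start itself is just a second start against an already-running profile, which must reuse it rather than re-provision:

    out/minikube-darwin-arm64 start -p functional-836000 --alsologtostderr -v=8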

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-836000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:3.1
E0307 13:57:24.135440    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:3.1: (3.462013208s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:3.3: (3.3554555s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:latest: (2.252067084s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.07s)
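cache add pulls the image on the host and side-loads it into the node's container runtime; the later verify_cache_inside_node step confirms it with crictl. By hand, roughly:

    out/minikube-darwin-arm64 -p functional-836000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl images   # the cached tag should be listed in-node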

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1884916844/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache add minikube-local-cache-test:functional-836000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache delete minikube-local-cache-test:functional-836000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-836000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (77.416958ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cache reload
E0307 13:57:34.377029    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 cache reload: (1.936303s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
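The reload check deletes a cached image inside the node and proves that cache reload restores it from the host-side cache (commands and exit codes as in the log above):

    out/minikube-darwin-arm64 -p functional-836000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-darwin-arm64 -p functional-836000 cache reload
    out/minikube-darwin-arm64 -p functional-836000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored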

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 kubectl -- --context functional-836000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-836000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (34.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 13:57:54.858763    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-836000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.679170709s)
functional_test.go:757: restart took 34.679271459s for "functional-836000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.68s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-836000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd4234749345/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-836000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-836000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-836000: exit status 115 (110.569542ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32570 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-836000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-836000 delete -f testdata/invalidsvc.yaml: (1.038912667s)
--- PASS: TestFunctional/serial/InvalidService (4.25s)
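minikube service is expected to fail fast with exit status 115 (SVC_UNREACHABLE) when the target service has no running endpoints; note the NodePort URL table is still printed before the error. Reproduced roughly as:

    kubectl --context functional-836000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-836000   # expect exit 115
    kubectl --context functional-836000 delete -f testdata/invalidsvc.yaml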

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 config get cpus: exit status 14 (32.864708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 config get cpus: exit status 14 (32.464208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
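The contract exercised here: config get on an unset key exits 14 with "specified key could not be found in config", while set/get/unset round-trip cleanly:

    out/minikube-darwin-arm64 -p functional-836000 config get cpus     # exit 14 while unset
    out/minikube-darwin-arm64 -p functional-836000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-836000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-836000 config unset cpus   # get exits 14 again afterwards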

TestFunctional/parallel/DashboardCmd (9.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-836000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-836000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2404: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.16s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.744208ms)

-- stdout --
	* [functional-836000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0307 13:59:03.477500    2391 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:59:03.477645    2391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.477648    2391 out.go:304] Setting ErrFile to fd 2...
	I0307 13:59:03.477651    2391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.477773    2391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:59:03.478857    2391 out.go:298] Setting JSON to false
	I0307 13:59:03.495334    2391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1700,"bootTime":1709847043,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:59:03.495394    2391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:59:03.500442    2391 out.go:177] * [functional-836000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 13:59:03.507417    2391 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 13:59:03.511429    2391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:59:03.507489    2391 notify.go:220] Checking for updates...
	I0307 13:59:03.517341    2391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:59:03.520398    2391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:59:03.523406    2391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 13:59:03.526353    2391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 13:59:03.529676    2391 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:59:03.529921    2391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:59:03.534388    2391 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 13:59:03.541413    2391 start.go:297] selected driver: qemu2
	I0307 13:59:03.541418    2391 start.go:901] validating driver "qemu2" against &{Name:functional-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:59:03.541465    2391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 13:59:03.547385    2391 out.go:177] 
	W0307 13:59:03.551367    2391 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 13:59:03.555405    2391 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
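--dry-run validates the requested config against the existing profile without touching the VM; an undersized --memory must fail with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a valid invocation must succeed:

    out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-836000 --dry-run --alsologtostderr -v=1 --driver=qemu2             # exit 0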

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.701792ms)

-- stdout --
	* [functional-836000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 13:59:03.354699    2387 out.go:291] Setting OutFile to fd 1 ...
	I0307 13:59:03.354818    2387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.354826    2387 out.go:304] Setting ErrFile to fd 2...
	I0307 13:59:03.354829    2387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 13:59:03.354973    2387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
	I0307 13:59:03.356341    2387 out.go:298] Setting JSON to false
	I0307 13:59:03.375113    2387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1700,"bootTime":1709847043,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0307 13:59:03.375203    2387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 13:59:03.381462    2387 out.go:177] * [functional-836000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0307 13:59:03.389436    2387 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 13:59:03.394355    2387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	I0307 13:59:03.389467    2387 notify.go:220] Checking for updates...
	I0307 13:59:03.400388    2387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 13:59:03.403330    2387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 13:59:03.406398    2387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	I0307 13:59:03.409415    2387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 13:59:03.411188    2387 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 13:59:03.411453    2387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 13:59:03.415426    2387 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0307 13:59:03.422276    2387 start.go:297] selected driver: qemu2
	I0307 13:59:03.422281    2387 start.go:901] validating driver "qemu2" against &{Name:functional-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 13:59:03.422328    2387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 13:59:03.429404    2387 out.go:177] 
	W0307 13:59:03.433493    2387 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 13:59:03.437344    2387 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
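For readers without French: this test deliberately runs minikube under a French locale, so the lines above read, in English, "* Using the qemu2 driver based on the existing profile" and "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal sketch of what the test exercises; the LC_ALL variable and the exact flags are assumptions, not copied from functional_test.go:

    # Hypothetical reproduction: request less memory than minikube's validated minimum
    # under a French locale, and expect the localized fatal error plus a non-zero exit.
    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-836000 --dry-run --memory 250MB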

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
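The -f flag renders minikube's status through a Go template over its status struct; {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the field names, while the surrounding labels (including the test's literal "kublet") are arbitrary output text. The same command, quoted for interactive use:

    # One-line status report; only the {{.Field}} names are fixed.
    out/minikube-darwin-arm64 -p functional-836000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'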

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (25.79s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [43847a32-0629-4074-9fe5-4127d92739ed] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004439584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-836000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-836000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-836000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-836000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [196b48f9-4c9d-48c0-aabf-55850a2b60ce] Pending
helpers_test.go:344: "sp-pod" [196b48f9-4c9d-48c0-aabf-55850a2b60ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [196b48f9-4c9d-48c0-aabf-55850a2b60ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.0036065s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-836000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-836000 delete -f testdata/storage-provisioner/pod.yaml
E0307 13:58:35.820066    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-836000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a163b63e-7d08-4c4b-8e46-d003401b0537] Pending
helpers_test.go:344: "sp-pod" [a163b63e-7d08-4c4b-8e46-d003401b0537] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a163b63e-7d08-4c4b-8e46-d003401b0537] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003822417s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-836000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.79s)
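The flow above is: wait for storage-provisioner, create a claim, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod, then confirm the file survived the pod's lifetime. The same binding can be checked by hand with the names taken from this log:

    # 'myclaim' and the kubectl context come from the log above.
    kubectl --context functional-836000 get pvc myclaim -o jsonpath='{.status.phase}'   # expect: Bound
    kubectl --context functional-836000 get storageclass -o name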

TestFunctional/parallel/SSHCmd (0.15s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

TestFunctional/parallel/CpCmd (0.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh -n functional-836000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cp functional-836000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1597621700/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh -n functional-836000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh -n functional-836000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.48s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1596/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /etc/test/nested/copy/1596/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1596.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /etc/ssl/certs/1596.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1596.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /usr/share/ca-certificates/1596.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /etc/ssl/certs/15962.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /usr/share/ca-certificates/15962.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.47s)
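The .0 files follow OpenSSL's hashed-symlink convention: /etc/ssl/certs/<subject-hash>.0 points at the matching PEM so TLS libraries can look certificates up by subject. Presumably 51391683.0 and 3ec20f2e.0 are the hash links for 1596.pem and 15962.pem; the hash for any certificate can be recomputed with:

    # Prints the subject hash (e.g. 51391683) that names the symlink.
    openssl x509 -hash -noout -in /etc/ssl/certs/1596.pem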

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-836000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "sudo systemctl is-active crio": exit status 1 (69.390459ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
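This passes despite the non-zero exit because the signal is the exit code, not stdout: systemctl is-active exits 0 only for an active unit and 3 for an inactive one (the "status 3" in the stderr above; minikube ssh itself then reports exit status 1 locally). In other words:

    # crio must NOT be active when docker is the configured runtime.
    out/minikube-darwin-arm64 -p functional-836000 ssh "sudo systemctl is-active crio" || echo "crio disabled, as expected"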

TestFunctional/parallel/License (1.65s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.653510042s)
--- PASS: TestFunctional/parallel/License (1.65s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2226: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-836000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7266a4f4-290d-4441-9485-c496c889ceca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7266a4f4-290d-4441-9485-c496c889ceca] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003872s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-836000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.241.157 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
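The two DNS subtests are complementary: dig queries the cluster DNS server directly (@10.96.0.10, reachable over the tunnel route), while dscacheutil resolves through macOS's own resolver, which only works if the cluster domain is wired into the host's resolver configuration (presumably arranged by the tunnel setup). Both checks by hand:

    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A   # ask cluster DNS directly
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.          # go through the macOS resolver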

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-836000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-836000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-836000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-2htjx" [18ac08df-803f-4ddd-94c1-c13993ed3531] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-2htjx" [18ac08df-803f-4ddd-94c1-c13993ed3531] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004348s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
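The NodePort that kubectl expose assigned here is what the HTTPS/Format/URL subtests below resolve (31800 in this run); it can be read back directly:

    kubectl --context functional-836000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'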

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service list -o json
functional_test.go:1490: Took "286.899875ms" to run "out/minikube-darwin-arm64 -p functional-836000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31800
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31800
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "115.352875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "38.613292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "117.089542ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "37.536958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
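The timings show why --light exists: it skips validating each cluster's status, so the listing returns in roughly a third of the time of the full check (about 38ms versus 117ms above):

    # Fast profile listing; status fields are skipped rather than probed.
    out/minikube-darwin-arm64 profile list -o json --light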

TestFunctional/parallel/MountCmd/any-port (9.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port996150160/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709848730653769000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port996150160/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709848730653769000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port996150160/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709848730653769000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port996150160/001/test-1709848730653769000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.271042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 21:58 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 21:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 21:58 test-1709848730653769000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh cat /mount-9p/test-1709848730653769000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-836000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2] Pending
helpers_test.go:344: "busybox-mount" [e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e2fa99f7-89f2-4ed9-b7d2-d7f938efa6e2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003568792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-836000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port996150160/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)
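Note that the first findmnt probe failed and the retry succeeded: minikube mount runs as a background daemon, so the 9p mount appears asynchronously and the test polls for it. A hand-rolled equivalent (the loop itself is illustrative, not from the test):

    # Wait for the background 'minikube mount' to finish mounting.
    until out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p"; do
        sleep 1
    done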

TestFunctional/parallel/MountCmd/specific-port (0.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3997279170/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.992584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3997279170/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "sudo umount -f /mount-9p": exit status 1 (65.552667ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-836000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3997279170/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.83s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount1: exit status 1 (92.380125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount3: exit status 1 (61.716917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount3: exit status 1 (67.082042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-836000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-836000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2577347136/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-836000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-836000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-836000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-836000 image ls --format short --alsologtostderr:
I0307 13:59:58.231992    2573 out.go:291] Setting OutFile to fd 1 ...
I0307 13:59:58.232167    2573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.232170    2573 out.go:304] Setting ErrFile to fd 2...
I0307 13:59:58.232173    2573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.232323    2573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 13:59:58.232809    2573 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.232871    2573 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.233863    2573 ssh_runner.go:195] Run: systemctl --version
I0307 13:59:58.233875    2573 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/functional-836000/id_rsa Username:docker}
I0307 13:59:58.262606    2573 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0307 13:59:58.272894    2573 root.go:91] failed to log command end to audit: failed to find a log row with id equals to e3b3d86c-259b-4140-ac92-ca77f499772e
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-836000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-836000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-836000 | 2d980aac0a72a | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-836000 image ls --format table --alsologtostderr:
I0307 13:59:58.391336    2582 out.go:291] Setting OutFile to fd 1 ...
I0307 13:59:58.391476    2582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.391479    2582 out.go:304] Setting ErrFile to fd 2...
I0307 13:59:58.391482    2582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.391605    2582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 13:59:58.392027    2582 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.392086    2582 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.392930    2582 ssh_runner.go:195] Run: systemctl --version
I0307 13:59:58.392941    2582 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/functional-836000/id_rsa Username:docker}
I0307 13:59:58.421678    2582 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-836000 image ls --format json --alsologtostderr:
[{"id":"2d980aac0a72a97048f322b20a3c2b71aba6b5c151a321421a3f6471b2d0ec4d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-836000"],"size":"30"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-836000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-836000 image ls --format json --alsologtostderr:
I0307 13:59:58.313242    2578 out.go:291] Setting OutFile to fd 1 ...
I0307 13:59:58.313354    2578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.313358    2578 out.go:304] Setting ErrFile to fd 2...
I0307 13:59:58.313361    2578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.313484    2578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 13:59:58.313884    2578 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.313950    2578 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.314855    2578 ssh_runner.go:195] Run: systemctl --version
I0307 13:59:58.314865    2578 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/functional-836000/id_rsa Username:docker}
I0307 13:59:58.342845    2578 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
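Since the JSON form is a single array of {id, repoDigests, repoTags, size} objects, it is the easiest of the formats to post-process; for example (the jq invocation is illustrative, not part of the test):

    out/minikube-darwin-arm64 -p functional-836000 image ls --format json \
        | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'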

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-836000 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2d980aac0a72a97048f322b20a3c2b71aba6b5c151a321421a3f6471b2d0ec4d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-836000
size: "30"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-836000
size: "32900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-836000 image ls --format yaml --alsologtostderr:
I0307 13:59:58.232066    2574 out.go:291] Setting OutFile to fd 1 ...
I0307 13:59:58.232205    2574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.232208    2574 out.go:304] Setting ErrFile to fd 2...
I0307 13:59:58.232211    2574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.232395    2574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 13:59:58.232886    2574 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.232949    2574 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.234386    2574 ssh_runner.go:195] Run: systemctl --version
I0307 13:59:58.234394    2574 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/functional-836000/id_rsa Username:docker}
I0307 13:59:58.262343    2574 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-836000 ssh pgrep buildkitd: exit status 1 (67.7605ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image build -t localhost/my-image:functional-836000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 image build -t localhost/my-image:functional-836000 testdata/build --alsologtostderr: (5.903482084s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-836000 image build -t localhost/my-image:functional-836000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 2166799e10ea
Removing intermediate container 2166799e10ea
---> 05ec7246999b
Step 3/3 : ADD content.txt /
---> 000b434dec5c
Successfully built 000b434dec5c
Successfully tagged localhost/my-image:functional-836000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-836000 image build -t localhost/my-image:functional-836000 testdata/build --alsologtostderr:
I0307 13:59:58.378156    2581 out.go:291] Setting OutFile to fd 1 ...
I0307 13:59:58.378436    2581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.378444    2581 out.go:304] Setting ErrFile to fd 2...
I0307 13:59:58.378447    2581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 13:59:58.378569    2581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18320-1153/.minikube/bin
I0307 13:59:58.379429    2581 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.380207    2581 config.go:182] Loaded profile config "functional-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 13:59:58.381441    2581 ssh_runner.go:195] Run: systemctl --version
I0307 13:59:58.381455    2581 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18320-1153/.minikube/machines/functional-836000/id_rsa Username:docker}
I0307 13:59:58.410056    2581 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3841308654.tar
I0307 13:59:58.410114    2581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 13:59:58.413981    2581 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3841308654.tar
I0307 13:59:58.415729    2581 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3841308654.tar: stat -c "%s %y" /var/lib/minikube/build/build.3841308654.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3841308654.tar': No such file or directory
I0307 13:59:58.415743    2581 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3841308654.tar --> /var/lib/minikube/build/build.3841308654.tar (3072 bytes)
I0307 13:59:58.426315    2581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3841308654
I0307 13:59:58.430641    2581 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3841308654 -xf /var/lib/minikube/build/build.3841308654.tar
I0307 13:59:58.435345    2581 docker.go:360] Building image: /var/lib/minikube/build/build.3841308654
I0307 13:59:58.435395    2581 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-836000 /var/lib/minikube/build/build.3841308654
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0307 14:00:04.235688    2581 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-836000 /var/lib/minikube/build/build.3841308654: (5.8004085s)
I0307 14:00:04.235762    2581 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3841308654
I0307 14:00:04.239345    2581 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3841308654.tar
I0307 14:00:04.242457    2581 build_images.go:207] Built localhost/my-image:functional-836000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3841308654.tar
I0307 14:00:04.242474    2581 build_images.go:123] succeeded building to: functional-836000
I0307 14:00:04.242478    2581 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.05s)
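
The three build steps above imply a Dockerfile of roughly this shape (a sketch reconstructed from the log; the actual contents of testdata/build are not shown here):

    # Sketch inferred from the Step 1/3..3/3 lines above; not the verbatim test fixture.
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /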

TestFunctional/parallel/ImageCommands/Setup (35.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/03/07 13:59:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (35.695956834s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-836000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (35.74s)

TestFunctional/parallel/DockerEnv/bash (0.37s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-836000 docker-env) && out/minikube-darwin-arm64 status -p functional-836000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-836000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.37s)
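
The docker-env subcommand prints shell export statements; eval-ing them points the host docker CLI at the Docker daemon inside the VM, which is what the two commands above verify. A minimal sketch of the pattern (nothing here beyond what the test itself runs):

    # eval applies the exports emitted by docker-env to the current shell
    eval $(out/minikube-darwin-arm64 -p functional-836000 docker-env)
    docker images    # now lists images from the VM's daemon, not the host's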

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr: (2.043777584s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr: (1.389883917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.54051725s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-836000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-836000 image load --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr: (1.900446833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image save gcr.io/google-containers/addon-resizer:functional-836000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image rm gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-836000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-836000 image save --daemon gcr.io/google-containers/addon-resizer:functional-836000 --alsologtostderr
E0307 13:59:57.739378    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-836000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
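
The four image tests above (save to file, remove, load from file, save to daemon) amount to a full round trip; the equivalent manual sequence is sketched below (the tarball path is illustrative):

    # export a tagged image from the cluster to a tarball on the host
    out/minikube-darwin-arm64 -p functional-836000 image save gcr.io/google-containers/addon-resizer:functional-836000 ./addon-resizer-save.tar
    # remove it from the cluster, then restore it from the tarball
    out/minikube-darwin-arm64 -p functional-836000 image rm gcr.io/google-containers/addon-resizer:functional-836000
    out/minikube-darwin-arm64 -p functional-836000 image load ./addon-resizer-save.tar
    # or write it straight back into the host Docker daemon
    out/minikube-darwin-arm64 -p functional-836000 image save --daemon gcr.io/google-containers/addon-resizer:functional-836000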

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-836000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-836000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-836000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMutliControlPlane/serial/StartCluster (438.37s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-827000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0307 14:02:13.866524    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 14:02:41.573350    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
E0307 14:03:17.375445    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.381256    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.393360    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.413988    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.455585    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.537706    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:17.699201    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:18.021290    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:18.662536    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:19.944690    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:22.506790    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:27.628782    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:37.870695    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:03:58.352384    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:04:39.313650    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:06:01.233886    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:07:13.859498    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/addons-263000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-827000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (7m18.169811833s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (438.37s)

TestMutliControlPlane/serial/DeployApp (8.41s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-827000 -- rollout status deployment/busybox: (6.922894459s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-2bhws -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-mdvgb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-rbgvt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-2bhws -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-mdvgb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-rbgvt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-2bhws -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-mdvgb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-rbgvt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (8.41s)

TestMutliControlPlane/serial/PingHostFromPods (0.8s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-2bhws -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-2bhws -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-mdvgb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-mdvgb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-rbgvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-827000 -- exec busybox-5b5d89c9d6-rbgvt -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (0.80s)

TestMutliControlPlane/serial/AddWorkerNode (81.7s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-827000 -v=7 --alsologtostderr
E0307 14:08:17.368556    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
E0307 14:08:45.072250    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-827000 -v=7 --alsologtostderr: (1m21.460423792s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (81.70s)

TestMutliControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-827000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.13s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (2.72s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.723788209s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (2.72s)

TestMutliControlPlane/serial/CopyFile (4.57s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp testdata/cp-test.txt ha-827000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMutliControlPlaneserialCopyFile3044311618/001/cp-test_ha-827000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000:/home/docker/cp-test.txt ha-827000-m02:/home/docker/cp-test_ha-827000_ha-827000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test_ha-827000_ha-827000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000:/home/docker/cp-test.txt ha-827000-m03:/home/docker/cp-test_ha-827000_ha-827000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test_ha-827000_ha-827000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000:/home/docker/cp-test.txt ha-827000-m04:/home/docker/cp-test_ha-827000_ha-827000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test_ha-827000_ha-827000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp testdata/cp-test.txt ha-827000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMutliControlPlaneserialCopyFile3044311618/001/cp-test_ha-827000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m02:/home/docker/cp-test.txt ha-827000:/home/docker/cp-test_ha-827000-m02_ha-827000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test_ha-827000-m02_ha-827000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m02:/home/docker/cp-test.txt ha-827000-m03:/home/docker/cp-test_ha-827000-m02_ha-827000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test_ha-827000-m02_ha-827000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m02:/home/docker/cp-test.txt ha-827000-m04:/home/docker/cp-test_ha-827000-m02_ha-827000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test_ha-827000-m02_ha-827000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp testdata/cp-test.txt ha-827000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMutliControlPlaneserialCopyFile3044311618/001/cp-test_ha-827000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m03:/home/docker/cp-test.txt ha-827000:/home/docker/cp-test_ha-827000-m03_ha-827000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test_ha-827000-m03_ha-827000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m03:/home/docker/cp-test.txt ha-827000-m02:/home/docker/cp-test_ha-827000-m03_ha-827000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test_ha-827000-m03_ha-827000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m03:/home/docker/cp-test.txt ha-827000-m04:/home/docker/cp-test_ha-827000-m03_ha-827000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test_ha-827000-m03_ha-827000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp testdata/cp-test.txt ha-827000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMutliControlPlaneserialCopyFile3044311618/001/cp-test_ha-827000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m04:/home/docker/cp-test.txt ha-827000:/home/docker/cp-test_ha-827000-m04_ha-827000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000 "sudo cat /home/docker/cp-test_ha-827000-m04_ha-827000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m04:/home/docker/cp-test.txt ha-827000-m02:/home/docker/cp-test_ha-827000-m04_ha-827000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m02 "sudo cat /home/docker/cp-test_ha-827000-m04_ha-827000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m04:/home/docker/cp-test.txt ha-827000-m03:/home/docker/cp-test_ha-827000-m04_ha-827000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-827000 ssh -n ha-827000-m03 "sudo cat /home/docker/cp-test_ha-827000-m04_ha-827000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (4.57s)
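
Every pairing exercised above is an instance of the same command shape: "minikube cp" copies host-to-node, node-to-host, and node-to-node, with secondary nodes named <profile>-m02, -m03, and so on. A sketch of the general form:

    # host -> node, then node -> node
    out/minikube-darwin-arm64 -p ha-827000 cp testdata/cp-test.txt ha-827000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p ha-827000 cp ha-827000-m02:/home/docker/cp-test.txt ha-827000-m03:/home/docker/cp-test.txt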

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (152.08s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0307 14:23:17.347896    1596 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18320-1153/.minikube/profiles/functional-836000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m32.079678667s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (152.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-328000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-328000 --output=json --user=testUser: (2.913610083s)
--- PASS: TestJSONOutput/stop/Command (2.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-266000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-266000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.81625ms)

-- stdout --
	{"specversion":"1.0","id":"b1ca4a51-3737-4687-a1ec-c6b00758d1db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-266000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17923047-2439-4835-89ca-d3c9ad325a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18320"}}
	{"specversion":"1.0","id":"073b8cf7-4225-448a-8cd9-c0670e8aeb3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig"}}
	{"specversion":"1.0","id":"bfe8cbc1-5848-4e5d-9570-f03a43b58673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d82fe3ab-7786-4969-a302-c81c1d40b78f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"68fd76b7-b33d-45ff-b425-83d9b34a22ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube"}}
	{"specversion":"1.0","id":"096d3f27-9c41-419f-b623-3a262806fd64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"afd45472-7170-4bff-96ad-7717761ff158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-266000
--- PASS: TestErrorJSONOutput (0.33s)
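
Each line of the --output=json stream above is a self-contained CloudEvents-style JSON object, so it can be filtered line by line. A sketch using jq (jq is an external tool, not part of the test suite; the expected message is taken from the error event above):

    out/minikube-darwin-arm64 start -p json-output-error-266000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on darwin/arm64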

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.642625ms)

-- stdout --
	* [NoKubernetes-902000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18320-1153/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18320-1153/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
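
The usage error is the expected result here: --kubernetes-version and --no-kubernetes are mutually exclusive. A sketch of the two valid alternatives the error message points to:

    # either drop the version flag entirely ...
    out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2
    # ... or clear a globally configured version first
    out/minikube-darwin-arm64 config unset kubernetes-version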

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.749208ms)

-- stdout --
	* The control-plane node NoKubernetes-902000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-902000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.732906458s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.696462s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)

TestNoKubernetes/serial/Stop (3.56s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-902000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-902000: (3.563168625s)
--- PASS: TestNoKubernetes/serial/Stop (3.56s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.755875ms)

-- stdout --
	* The control-plane node NoKubernetes-902000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-902000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-491000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (2.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-935000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-935000 --alsologtostderr -v=3: (2.0154835s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (53.971875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-935000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/no-preload/serial/Stop (3.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-303000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-303000 --alsologtostderr -v=3: (3.833228042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-303000 -n no-preload-303000: exit status 7 (58.636459ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-303000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
TestStartStop/group/embed-certs/serial/Stop (3.31s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-616000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-616000 --alsologtostderr -v=3: (3.313622375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.31s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (59.861083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-616000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (3.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-671000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-671000 --alsologtostderr -v=3: (3.1529815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.15s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-671000 -n default-k8s-diff-port-671000: exit status 7 (58.895417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-671000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-614000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
TestStartStop/group/newest-cni/serial/Stop (3.69s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-614000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-614000 --alsologtostderr -v=3: (3.686442625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.69s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-614000 -n newest-cni-614000: exit status 7 (60.688166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-614000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
Test skip (23/274)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)
TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
TestNetworkPlugins/group/cilium (2.47s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-581000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-581000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-581000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/hosts:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/resolv.conf:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-581000

>>> host: crictl pods:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: crictl containers:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> k8s: describe netcat deployment:
error: context "cilium-581000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-581000" does not exist

>>> k8s: netcat logs:
error: context "cilium-581000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-581000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-581000" does not exist

>>> k8s: coredns logs:
error: context "cilium-581000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-581000" does not exist

>>> k8s: api server logs:
error: context "cilium-581000" does not exist

>>> host: /etc/cni:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: ip a s:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: ip r s:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: iptables-save:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: iptables table nat:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-581000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-581000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-581000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-581000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-581000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-581000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-581000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-581000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-581000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-581000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-581000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: kubelet daemon config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> k8s: kubelet logs:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-581000

>>> host: docker daemon status:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: docker daemon config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: docker system info:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: cri-docker daemon status:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: cri-docker daemon config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: cri-dockerd version:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: containerd daemon status:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: containerd daemon config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: containerd config dump:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: crio daemon status:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: crio daemon config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: /etc/crio:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

>>> host: crio config:
* Profile "cilium-581000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-581000"

----------------------- debugLogs end: cilium-581000 [took: 2.238010375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-581000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-581000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)
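
The debugLogs sweep above runs a fixed battery of diagnostics against the profile and records every result, failures included (here every probe fails because the cilium profile was never started). A compressed Go sketch of that pattern (the probe list below is abbreviated and hypothetical; the real helper behind net_test.go covers many more commands):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-581000"
	probes := [][]string{
		{"kubectl", "--context", profile, "get", "nodes,svc,ep,ds,deploy,pods", "-A"},
		{"kubectl", "--context", profile, "describe", "deploy", "-n", "kube-system", "coredns"},
		{"out/minikube-darwin-arm64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, p := range probes {
		out, err := exec.Command(p[0], p[1:]...).CombinedOutput()
		fmt.Printf(">>> %v:\n%s", p, out)
		if err != nil {
			// Failures are expected when the profile does not exist;
			// they are recorded rather than aborting the sweep.
			fmt.Printf("(error: %v)\n", err)
		}
	}
}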
TestStartStop/group/disable-driver-mounts (0.23s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-209000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-209000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)