Test Report: QEMU_macOS 18333

35bb0a6fdb2e8bad0653ad48b3d817d653ac2a3a:2024-03-07:33467

Failed tests (98/281)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.83
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.2
39 TestAddons/parallel/Ingress 34.05
54 TestCertOptions 10.22
55 TestCertExpiration 195.22
56 TestDockerFlags 10.01
57 TestForceSystemdFlag 10.16
58 TestForceSystemdEnv 10.21
103 TestFunctional/parallel/ServiceCmdConnect 39.67
175 TestMutliControlPlane/serial/StopSecondaryNode 214.12
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.51
177 TestMutliControlPlane/serial/RestartSecondaryNode 209.02
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 234.38
180 TestMutliControlPlane/serial/DeleteSecondaryNode 0.1
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.08
182 TestMutliControlPlane/serial/StopCluster 202.09
183 TestMutliControlPlane/serial/RestartCluster 5.26
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.11
185 TestMutliControlPlane/serial/AddSecondaryNode 0.08
189 TestImageBuild/serial/Setup 10.09
192 TestJSONOutput/start/Command 9.75
198 TestJSONOutput/pause/Command 0.08
204 TestJSONOutput/unpause/Command 0.05
221 TestMinikubeProfile 10.31
224 TestMountStart/serial/StartWithMountFirst 11.05
227 TestMultiNode/serial/FreshStart2Nodes 9.85
228 TestMultiNode/serial/DeployApp2Nodes 90.42
229 TestMultiNode/serial/PingHostFrom2Pods 0.09
230 TestMultiNode/serial/AddNode 0.08
231 TestMultiNode/serial/MultiNodeLabels 0.13
232 TestMultiNode/serial/ProfileList 0.1
233 TestMultiNode/serial/CopyFile 0.06
234 TestMultiNode/serial/StopNode 0.14
235 TestMultiNode/serial/StartAfterStop 47.13
236 TestMultiNode/serial/RestartKeepsNodes 7.32
237 TestMultiNode/serial/DeleteNode 0.1
238 TestMultiNode/serial/StopMultiNode 3.31
239 TestMultiNode/serial/RestartMultiNode 5.26
240 TestMultiNode/serial/ValidateNameConflict 20.06
244 TestPreload 10.13
246 TestScheduledStopUnix 10.07
247 TestSkaffold 16.62
250 TestRunningBinaryUpgrade 634.65
252 TestKubernetesUpgrade 19.04
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.99
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.38
268 TestStoppedBinaryUpgrade/Upgrade 580.33
270 TestPause/serial/Start 10.07
280 TestNoKubernetes/serial/StartWithK8s 9.91
281 TestNoKubernetes/serial/StartWithStopK8s 5.92
282 TestNoKubernetes/serial/Start 6.36
286 TestNoKubernetes/serial/StartNoArgs 6.43
288 TestNetworkPlugins/group/auto/Start 10.06
289 TestNetworkPlugins/group/calico/Start 9.69
290 TestNetworkPlugins/group/custom-flannel/Start 9.89
291 TestNetworkPlugins/group/false/Start 9.88
292 TestNetworkPlugins/group/kindnet/Start 9.83
293 TestNetworkPlugins/group/flannel/Start 9.72
294 TestNetworkPlugins/group/enable-default-cni/Start 9.81
296 TestNetworkPlugins/group/bridge/Start 10.11
297 TestNetworkPlugins/group/kubenet/Start 9.81
299 TestStartStop/group/old-k8s-version/serial/FirstStart 10.14
301 TestStartStop/group/no-preload/serial/FirstStart 11.83
302 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
305 TestStartStop/group/no-preload/serial/DeployApp 0.09
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
309 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
311 TestStartStop/group/no-preload/serial/SecondStart 5.7
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
315 TestStartStop/group/old-k8s-version/serial/Pause 0.1
317 TestStartStop/group/embed-certs/serial/FirstStart 9.95
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/no-preload/serial/Pause 0.1
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.8
324 TestStartStop/group/embed-certs/serial/DeployApp 0.09
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
331 TestStartStop/group/embed-certs/serial/SecondStart 5.26
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.55
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
337 TestStartStop/group/embed-certs/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/FirstStart 10.05
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
348 TestStartStop/group/newest-cni/serial/SecondStart 5.26
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
352 TestStartStop/group/newest-cni/serial/Pause 0.11
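
Any single failure in this table can be reproduced locally. A minimal sketch, assuming a minikube tree checked out at the commit above with out/minikube-darwin-arm64 already built, and assuming the TEST_ARGS convention from minikube's contributor docs (adjust the target and flags if your workflow differs):

	env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline" make integration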
TestDownloadOnly/v1.20.0/json-events (41.83s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-410000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-410000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.832332958s)

-- stdout --
	{"specversion":"1.0","id":"b2e05692-ff92-49c7-a2ca-da87451b6e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-410000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2af5a5a2-8a6a-4423-888f-4eab9be2b6f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"f8eada41-c2a9-4bbc-b66a-6df9b7dafa92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig"}}
	{"specversion":"1.0","id":"534a3439-439b-4216-ba52-edb7ca068cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"295476c4-d88a-4360-8724-3cc18c01d0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7dbe1bb8-9e82-4463-8921-913fbe446726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube"}}
	{"specversion":"1.0","id":"ad26b13d-7131-421b-929b-8437af5703f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a3031ea3-efe8-4113-85f0-6a5b4ce72ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"311bfff8-2bfa-4558-acf2-ea0320fa530e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2812dcef-61df-4f1f-9879-ee5879b7f2ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"89278c49-d0d7-448b-81b7-b55bdcbb0eda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-410000\" primary control-plane node in \"download-only-410000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b2b3eb7-f6a3-4b48-9447-757c4913b431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea766c26-c7a2-4e87-8e30-421b7bfd5b24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0] Decompressors:map[bz2:0x140006d6910 gz:0x140006d6918 tar:0x140006d68c0 tar.bz2:0x140006d68d0 tar.gz:0x140006d68e0 tar.xz:0x140006d68f0 tar.zst:0x140006d6900 tbz2:0x140006d68d0 tgz:0x140006d68e0 txz:0x140006d68f0 tzst:0x140006d6900 xz:0x140006d6920 zip:0x140006d6930 zst:0x140006d6928] Getters:map[file:0x1400211e570 http:0x140009862d0 https:0x14000986320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a6b10c4e-252a-4025-bd7a-ff86ab169a10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0307 18:55:38.899590    1622 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:55:38.899725    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:55:38.899728    1622 out.go:304] Setting ErrFile to fd 2...
	I0307 18:55:38.899731    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:55:38.899853    1622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	W0307 18:55:38.899933    1622 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18333-1199/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18333-1199/.minikube/config/config.json: no such file or directory
	I0307 18:55:38.901171    1622 out.go:298] Setting JSON to true
	I0307 18:55:38.918688    1622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1510,"bootTime":1709865028,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 18:55:38.918756    1622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 18:55:38.925119    1622 out.go:97] [download-only-410000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 18:55:38.928093    1622 out.go:169] MINIKUBE_LOCATION=18333
	I0307 18:55:38.925256    1622 notify.go:220] Checking for updates...
	W0307 18:55:38.925279    1622 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 18:55:38.936073    1622 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:55:38.937668    1622 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 18:55:38.941129    1622 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:55:38.944098    1622 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	W0307 18:55:38.950041    1622 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:55:38.950269    1622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:55:38.955061    1622 out.go:97] Using the qemu2 driver based on user configuration
	I0307 18:55:38.955081    1622 start.go:297] selected driver: qemu2
	I0307 18:55:38.955096    1622 start.go:901] validating driver "qemu2" against <nil>
	I0307 18:55:38.955167    1622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:55:38.959067    1622 out.go:169] Automatically selected the socket_vmnet network
	I0307 18:55:38.964613    1622 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 18:55:38.964742    1622 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:55:38.964842    1622 cni.go:84] Creating CNI manager for ""
	I0307 18:55:38.964859    1622 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 18:55:38.964907    1622 start.go:340] cluster config:
	{Name:download-only-410000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:55:38.970526    1622 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:55:38.975119    1622 out.go:97] Downloading VM boot image ...
	I0307 18:55:38.975172    1622 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 18:55:57.562837    1622 out.go:97] Starting "download-only-410000" primary control-plane node in "download-only-410000" cluster
	I0307 18:55:57.562884    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:55:57.835782    1622 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 18:55:57.835874    1622 cache.go:56] Caching tarball of preloaded images
	I0307 18:55:57.836577    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:55:57.842564    1622 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 18:55:57.842588    1622 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:55:58.431610    1622 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 18:56:19.086670    1622 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:19.086835    1622 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:19.788399    1622 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 18:56:19.788588    1622 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-410000/config.json ...
	I0307 18:56:19.788604    1622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-410000/config.json: {Name:mke5b03ac0a37a6ec34b8b6fd54e5c17259e0351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:56:19.788848    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:56:19.789024    1622 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 18:56:20.651764    1622 out.go:169] 
	W0307 18:56:20.656833    1622 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0] Decompressors:map[bz2:0x140006d6910 gz:0x140006d6918 tar:0x140006d68c0 tar.bz2:0x140006d68d0 tar.gz:0x140006d68e0 tar.xz:0x140006d68f0 tar.zst:0x140006d6900 tbz2:0x140006d68d0 tgz:0x140006d68e0 txz:0x140006d68f0 tzst:0x140006d6900 xz:0x140006d6920 zip:0x140006d6930 zst:0x140006d6928] Getters:map[file:0x1400211e570 http:0x140009862d0 https:0x14000986320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 18:56:20.656861    1622 out_reason.go:110] 
	W0307 18:56:20.664788    1622 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 18:56:20.668745    1622 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-410000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.83s)
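
The error payload above shows the root cause: fetching https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, so the checksum (and therefore kubectl) cannot be cached. Kubernetes releases as old as v1.20.0 do not appear to publish darwin/arm64 kubectl binaries at all, so the 404 should be reproducible from any machine with plain curl (the command below follows the dl.k8s.io redirect and prints only the final status code):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256

The TestDownloadOnly/v1.20.0/kubectl failure below is a direct consequence: it only stats the kubectl binary this step failed to cache.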

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-959000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-959000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.002799917s)

-- stdout --
	* [offline-docker-959000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-959000" primary control-plane node in "offline-docker-959000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:36:09.316394    4239 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:36:09.316516    4239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:09.316520    4239 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:09.316522    4239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:09.316660    4239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:36:09.317760    4239 out.go:298] Setting JSON to false
	I0307 19:36:09.335208    4239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3941,"bootTime":1709865028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:36:09.335275    4239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:36:09.340768    4239 out.go:177] * [offline-docker-959000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:36:09.348676    4239 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:36:09.351671    4239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:36:09.348703    4239 notify.go:220] Checking for updates...
	I0307 19:36:09.357563    4239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:36:09.360697    4239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:36:09.361830    4239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:36:09.364591    4239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:36:09.367971    4239 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:09.368030    4239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:36:09.371468    4239 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:36:09.378648    4239 start.go:297] selected driver: qemu2
	I0307 19:36:09.378658    4239 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:36:09.378665    4239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:36:09.380534    4239 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:36:09.383736    4239 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:36:09.386760    4239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:36:09.386798    4239 cni.go:84] Creating CNI manager for ""
	I0307 19:36:09.386807    4239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:36:09.386817    4239 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:36:09.386859    4239 start.go:340] cluster config:
	{Name:offline-docker-959000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:36:09.391241    4239 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:36:09.397559    4239 out.go:177] * Starting "offline-docker-959000" primary control-plane node in "offline-docker-959000" cluster
	I0307 19:36:09.401553    4239 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:36:09.401577    4239 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:36:09.401591    4239 cache.go:56] Caching tarball of preloaded images
	I0307 19:36:09.401651    4239 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:36:09.401656    4239 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:36:09.401721    4239 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/offline-docker-959000/config.json ...
	I0307 19:36:09.401730    4239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/offline-docker-959000/config.json: {Name:mk074409d7b5866434341887d1021786a2fc285b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:09.402018    4239 start.go:360] acquireMachinesLock for offline-docker-959000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:09.402055    4239 start.go:364] duration metric: took 28µs to acquireMachinesLock for "offline-docker-959000"
	I0307 19:36:09.402066    4239 start.go:93] Provisioning new machine with config: &{Name:offline-docker-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:09.402104    4239 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:09.409603    4239 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:09.425213    4239 start.go:159] libmachine.API.Create for "offline-docker-959000" (driver="qemu2")
	I0307 19:36:09.425249    4239 client.go:168] LocalClient.Create starting
	I0307 19:36:09.425318    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:09.425346    4239 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:09.425356    4239 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:09.425402    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:09.425424    4239 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:09.425432    4239 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:09.425764    4239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:09.564754    4239 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:09.760598    4239 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:09.760610    4239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:09.760829    4239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:09.774432    4239 main.go:141] libmachine: STDOUT: 
	I0307 19:36:09.774460    4239 main.go:141] libmachine: STDERR: 
	I0307 19:36:09.774534    4239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2 +20000M
	I0307 19:36:09.788549    4239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:09.788573    4239 main.go:141] libmachine: STDERR: 
	I0307 19:36:09.788587    4239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:09.788591    4239 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:09.788622    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a2:f0:fd:69:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:09.790235    4239 main.go:141] libmachine: STDOUT: 
	I0307 19:36:09.790256    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:09.790276    4239 client.go:171] duration metric: took 365.033834ms to LocalClient.Create
	I0307 19:36:11.792273    4239 start.go:128] duration metric: took 2.390258417s to createHost
	I0307 19:36:11.792316    4239 start.go:83] releasing machines lock for "offline-docker-959000", held for 2.390354s
	W0307 19:36:11.792325    4239 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:11.799531    4239 out.go:177] * Deleting "offline-docker-959000" in qemu2 ...
	W0307 19:36:11.810196    4239 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:11.810207    4239 start.go:728] Will try again in 5 seconds ...
	I0307 19:36:16.812186    4239 start.go:360] acquireMachinesLock for offline-docker-959000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:16.812666    4239 start.go:364] duration metric: took 380µs to acquireMachinesLock for "offline-docker-959000"
	I0307 19:36:16.812803    4239 start.go:93] Provisioning new machine with config: &{Name:offline-docker-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:16.813095    4239 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:16.822683    4239 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:16.872414    4239 start.go:159] libmachine.API.Create for "offline-docker-959000" (driver="qemu2")
	I0307 19:36:16.872465    4239 client.go:168] LocalClient.Create starting
	I0307 19:36:16.872568    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:16.872628    4239 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:16.872648    4239 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:16.872708    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:16.872752    4239 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:16.872767    4239 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:16.873311    4239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:17.022960    4239 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:17.211117    4239 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:17.211124    4239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:17.211281    4239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:17.224081    4239 main.go:141] libmachine: STDOUT: 
	I0307 19:36:17.224104    4239 main.go:141] libmachine: STDERR: 
	I0307 19:36:17.224163    4239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2 +20000M
	I0307 19:36:17.234741    4239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:17.234763    4239 main.go:141] libmachine: STDERR: 
	I0307 19:36:17.234776    4239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:17.234785    4239 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:17.234824    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:cb:f7:fe:4b:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/offline-docker-959000/disk.qcow2
	I0307 19:36:17.236354    4239 main.go:141] libmachine: STDOUT: 
	I0307 19:36:17.236372    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:17.236384    4239 client.go:171] duration metric: took 363.928459ms to LocalClient.Create
	I0307 19:36:19.238491    4239 start.go:128] duration metric: took 2.425452084s to createHost
	I0307 19:36:19.238627    4239 start.go:83] releasing machines lock for "offline-docker-959000", held for 2.425964959s
	W0307 19:36:19.238940    4239 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:19.253525    4239 out.go:177] 
	W0307 19:36:19.257397    4239 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:36:19.257428    4239 out.go:239] * 
	* 
	W0307 19:36:19.260473    4239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:19.273404    4239 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-959000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-07 19:36:19.289001 -0800 PST m=+2440.613400209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-959000 -n offline-docker-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-959000 -n offline-docker-959000: exit status 7 (69.841708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-959000
--- FAIL: TestOffline (10.20s)
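
The decisive line in this log is ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon listening on that socket, so the failure is on the host rather than in minikube, and many of the other ~10 s qemu2 start failures in the table above likely share this cause. A minimal sketch for checking and restarting the daemon, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs:

	ls -l /var/run/socket_vmnet
	sudo brew services restart socket_vmnet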

TestAddons/parallel/Ingress (34.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-935000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-935000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-935000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ea9f33ec-9080-42c6-9ae0-c591b67c9602] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ea9f33ec-9080-42c6-9ae0-c591b67c9602] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003785292s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-935000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.035632625s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-935000 addons disable ingress --alsologtostderr -v=1: (7.214362916s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-935000 -n addons-935000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| delete  | -p download-only-878000                                                                     | download-only-878000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| delete  | -p download-only-410000                                                                     | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| delete  | -p download-only-277000                                                                     | download-only-277000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| delete  | -p download-only-878000                                                                     | download-only-878000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| start   | --download-only -p                                                                          | binary-mirror-653000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST |                     |
	|         | binary-mirror-653000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49331                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-653000                                                                     | binary-mirror-653000 | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 18:57 PST |
	| addons  | enable dashboard -p                                                                         | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 18:57 PST |                     |
	|         | addons-935000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 18:57 PST |                     |
	|         | addons-935000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-935000 --wait=true                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 18:57 PST | 07 Mar 24 19:00 PST |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-935000 ip                                                                            | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	| addons  | addons-935000 addons disable                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-935000 addons                                                                        | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | -p addons-935000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-935000 ssh cat                                                                       | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | /opt/local-path-provisioner/pvc-3f1ef09e-da34-419e-8eee-222697714314_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-935000 addons disable                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:02 PST |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-935000 addons                                                                        | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-935000 addons                                                                        | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | addons-935000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:01 PST | 07 Mar 24 19:01 PST |
	|         | -p addons-935000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-935000 ssh curl -s                                                                   | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:02 PST | 07 Mar 24 19:02 PST |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-935000 ip                                                                            | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:02 PST | 07 Mar 24 19:02 PST |
	| addons  | disable inspektor-gadget -p                                                                 | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:02 PST | 07 Mar 24 19:02 PST |
	|         | addons-935000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-935000 addons disable                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:02 PST | 07 Mar 24 19:02 PST |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-935000 addons disable                                                                | addons-935000        | jenkins | v1.32.0 | 07 Mar 24 19:02 PST | 07 Mar 24 19:02 PST |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:57:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:57:19.091641    1790 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:57:19.091779    1790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:57:19.091782    1790 out.go:304] Setting ErrFile to fd 2...
	I0307 18:57:19.091785    1790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:57:19.091899    1790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 18:57:19.092935    1790 out.go:298] Setting JSON to false
	I0307 18:57:19.108974    1790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1611,"bootTime":1709865028,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 18:57:19.109085    1790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 18:57:19.113737    1790 out.go:177] * [addons-935000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 18:57:19.120643    1790 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 18:57:19.124639    1790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:57:19.120726    1790 notify.go:220] Checking for updates...
	I0307 18:57:19.130590    1790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 18:57:19.133640    1790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:57:19.136638    1790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 18:57:19.139625    1790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:57:19.142755    1790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:57:19.146587    1790 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 18:57:19.153659    1790 start.go:297] selected driver: qemu2
	I0307 18:57:19.153666    1790 start.go:901] validating driver "qemu2" against <nil>
	I0307 18:57:19.153674    1790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:57:19.155908    1790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:57:19.159621    1790 out.go:177] * Automatically selected the socket_vmnet network
	I0307 18:57:19.162691    1790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:57:19.162725    1790 cni.go:84] Creating CNI manager for ""
	I0307 18:57:19.162732    1790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 18:57:19.162741    1790 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 18:57:19.162767    1790 start.go:340] cluster config:
	{Name:addons-935000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:57:19.167145    1790 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:57:19.175605    1790 out.go:177] * Starting "addons-935000" primary control-plane node in "addons-935000" cluster
	I0307 18:57:19.179635    1790 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 18:57:19.179650    1790 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 18:57:19.179665    1790 cache.go:56] Caching tarball of preloaded images
	I0307 18:57:19.179728    1790 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 18:57:19.179735    1790 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 18:57:19.180015    1790 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/config.json ...
	I0307 18:57:19.180027    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/config.json: {Name:mkf7a1deae7825ad9261e7e2ebccbf477ff2dfd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:19.180264    1790 start.go:360] acquireMachinesLock for addons-935000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 18:57:19.180431    1790 start.go:364] duration metric: took 160.5µs to acquireMachinesLock for "addons-935000"
	I0307 18:57:19.180442    1790 start.go:93] Provisioning new machine with config: &{Name:addons-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:57:19.180479    1790 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 18:57:19.185621    1790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0307 18:57:20.130301    1790 start.go:159] libmachine.API.Create for "addons-935000" (driver="qemu2")
	I0307 18:57:20.130353    1790 client.go:168] LocalClient.Create starting
	I0307 18:57:20.130605    1790 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 18:57:20.282588    1790 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 18:57:20.440845    1790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 18:57:21.136748    1790 main.go:141] libmachine: Creating SSH key...
	I0307 18:57:21.232835    1790 main.go:141] libmachine: Creating Disk image...
	I0307 18:57:21.232845    1790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 18:57:21.233067    1790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2
	I0307 18:57:21.321739    1790 main.go:141] libmachine: STDOUT: 
	I0307 18:57:21.321775    1790 main.go:141] libmachine: STDERR: 
	I0307 18:57:21.321853    1790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2 +20000M
	I0307 18:57:21.335108    1790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 18:57:21.335127    1790 main.go:141] libmachine: STDERR: 
	I0307 18:57:21.335141    1790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2
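
	[Note: the two qemu-img calls above first convert the raw seed image to qcow2, then grow it by +20000M before first boot. A minimal Go sketch of that same sequence, assuming qemu-img is on PATH; paths and the size are illustrative, not minikube's actual code.]

	    // disk_sketch.go - the qemu-img convert + resize sequence logged above.
	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os/exec"
	    )

	    func createDisk(raw, qcow2 string, extraMB int) error {
	    	// Convert the raw seed image to qcow2, as in the "qemu-img convert" line.
	    	if out, err := exec.Command("qemu-img", "convert",
	    		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
	    		return fmt.Errorf("convert: %v: %s", err, out)
	    	}
	    	// Grow the image by the requested amount, as in the "qemu-img resize" line.
	    	if out, err := exec.Command("qemu-img", "resize",
	    		qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
	    		return fmt.Errorf("resize: %v: %s", err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
	    		log.Fatal(err)
	    	}
	    }
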
	I0307 18:57:21.335147    1790 main.go:141] libmachine: Starting QEMU VM...
	I0307 18:57:21.335179    1790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:52:ae:2a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/disk.qcow2
	I0307 18:57:21.387886    1790 main.go:141] libmachine: STDOUT: 
	I0307 18:57:21.387924    1790 main.go:141] libmachine: STDERR: 
	I0307 18:57:21.387928    1790 main.go:141] libmachine: Attempt 0
	I0307 18:57:21.387942    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:21.387994    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:21.388012    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:23.390158    1790 main.go:141] libmachine: Attempt 1
	I0307 18:57:23.390238    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:23.390525    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:23.390575    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:25.392803    1790 main.go:141] libmachine: Attempt 2
	I0307 18:57:25.392927    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:25.393209    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:25.393269    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:27.395397    1790 main.go:141] libmachine: Attempt 3
	I0307 18:57:27.395440    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:27.395496    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:27.395520    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:29.397529    1790 main.go:141] libmachine: Attempt 4
	I0307 18:57:29.397537    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:29.397567    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:29.397574    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:31.399556    1790 main.go:141] libmachine: Attempt 5
	I0307 18:57:31.399569    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:31.399597    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:31.399602    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:33.400521    1790 main.go:141] libmachine: Attempt 6
	I0307 18:57:33.400547    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:33.400610    1790 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 18:57:33.400622    1790 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65ebcf8f}
	I0307 18:57:35.402744    1790 main.go:141] libmachine: Attempt 7
	I0307 18:57:35.402844    1790 main.go:141] libmachine: Searching for e6:c1:52:ae:2a:b in /var/db/dhcpd_leases ...
	I0307 18:57:35.403159    1790 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0307 18:57:35.403212    1790 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e6:c1:52:ae:2a:b ID:1,e6:c1:52:ae:2a:b Lease:0x65ebd01e}
	I0307 18:57:35.403226    1790 main.go:141] libmachine: Found match: e6:c1:52:ae:2a:b
	I0307 18:57:35.403258    1790 main.go:141] libmachine: IP: 192.168.105.2
	I0307 18:57:35.403276    1790 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
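
	[Note: the "Searching for e6:c1:52:ae:2a:b" loop above polls /var/db/dhcpd_leases until the VM's NIC acquires a lease; the MAC is printed without the leading zero of the -device flag's mac=e6:c1:52:ae:2a:0b, which is why the two spellings differ. A rough Go sketch of that scan, assuming the usual key=value block layout of that file.]

	    // lease_sketch.go - find the IP for a MAC in /var/db/dhcpd_leases.
	    package main

	    import (
	    	"bufio"
	    	"fmt"
	    	"os"
	    	"strings"
	    )

	    // findIP returns the ip_address of the lease whose hw_address ends in mac.
	    // Assumes ip_address precedes hw_address inside each lease block.
	    func findIP(path, mac string) (string, error) {
	    	f, err := os.Open(path)
	    	if err != nil {
	    		return "", err
	    	}
	    	defer f.Close()

	    	ip := ""
	    	sc := bufio.NewScanner(f)
	    	for sc.Scan() {
	    		line := strings.TrimSpace(sc.Text())
	    		switch {
	    		case strings.HasPrefix(line, "ip_address="):
	    			ip = strings.TrimPrefix(line, "ip_address=")
	    		case strings.HasPrefix(line, "hw_address="):
	    			// e.g. hw_address=1,e6:c1:52:ae:2a:b - skip the "1," type prefix.
	    			if strings.HasSuffix(line, ","+mac) {
	    				return ip, nil
	    			}
	    		}
	    	}
	    	return "", fmt.Errorf("no lease found for %s", mac)
	    }

	    func main() {
	    	ip, err := findIP("/var/db/dhcpd_leases", "e6:c1:52:ae:2a:b")
	    	fmt.Println(ip, err)
	    }
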
	I0307 18:57:38.426784    1790 machine.go:94] provisionDockerMachine start ...
	I0307 18:57:38.428184    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.428626    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.428641    1790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 18:57:38.501058    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 18:57:38.501089    1790 buildroot.go:166] provisioning hostname "addons-935000"
	I0307 18:57:38.501196    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.501422    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.501432    1790 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-935000 && echo "addons-935000" | sudo tee /etc/hostname
	I0307 18:57:38.567625    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-935000
	
	I0307 18:57:38.567711    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.567887    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.567901    1790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-935000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-935000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-935000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:57:38.623454    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:57:38.623470    1790 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18333-1199/.minikube CaCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18333-1199/.minikube}
	I0307 18:57:38.623487    1790 buildroot.go:174] setting up certificates
	I0307 18:57:38.623494    1790 provision.go:84] configureAuth start
	I0307 18:57:38.623498    1790 provision.go:143] copyHostCerts
	I0307 18:57:38.623655    1790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem (1123 bytes)
	I0307 18:57:38.623950    1790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem (1675 bytes)
	I0307 18:57:38.624085    1790 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem (1082 bytes)
	I0307 18:57:38.624195    1790 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem org=jenkins.addons-935000 san=[127.0.0.1 192.168.105.2 addons-935000 localhost minikube]
	I0307 18:57:38.832665    1790 provision.go:177] copyRemoteCerts
	I0307 18:57:38.832745    1790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:57:38.832769    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:57:38.861700    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:57:38.873894    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:57:38.882263    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 18:57:38.890421    1790 provision.go:87] duration metric: took 266.92475ms to configureAuth
	I0307 18:57:38.890431    1790 buildroot.go:189] setting minikube options for container-runtime
	I0307 18:57:38.890539    1790 config.go:182] Loaded profile config "addons-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 18:57:38.890581    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.890663    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.890667    1790 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 18:57:38.938956    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 18:57:38.938963    1790 buildroot.go:70] root file system type: tmpfs
	I0307 18:57:38.939013    1790 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 18:57:38.939058    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.939158    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.939190    1790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 18:57:38.991301    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 18:57:38.991341    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:38.991428    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:38.991437    1790 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 18:57:39.310200    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 18:57:39.310213    1790 machine.go:97] duration metric: took 883.429417ms to provisionDockerMachine
	I0307 18:57:39.310218    1790 client.go:171] duration metric: took 19.180403s to LocalClient.Create
	I0307 18:57:39.310234    1790 start.go:167] duration metric: took 19.180485834s to libmachine.API.Create "addons-935000"
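
	[Note: the diff-or-replace one-liner above installs docker.service.new only when it differs from what is already on disk, so daemon-reload and a docker restart happen only on a real change; the "printf %!s(MISSING)" earlier is the logger mangling a literal %s verb in the captured command. A local-filesystem sketch of the same install-if-changed idiom in Go, with illustrative paths.]

	    // swapifchanged_sketch.go - install a config file only when it changed.
	    package main

	    import (
	    	"bytes"
	    	"log"
	    	"os"
	    )

	    // installIfChanged writes data to path and reports whether anything changed.
	    func installIfChanged(path string, data []byte) (bool, error) {
	    	old, err := os.ReadFile(path)
	    	if err == nil && bytes.Equal(old, data) {
	    		return false, nil // identical: no write, no service restart needed
	    	}
	    	if err != nil && !os.IsNotExist(err) {
	    		return false, err
	    	}
	    	return true, os.WriteFile(path, data, 0o644)
	    }

	    func main() {
	    	changed, err := installIfChanged("docker.service", []byte("[Unit]\n"))
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	// A caller would run daemon-reload + restart only when changed is true.
	    	log.Println("changed:", changed)
	    }
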
	I0307 18:57:39.310239    1790 start.go:293] postStartSetup for "addons-935000" (driver="qemu2")
	I0307 18:57:39.310244    1790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:57:39.310310    1790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:57:39.310320    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:57:39.337655    1790 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:57:39.339139    1790 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 18:57:39.339151    1790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/addons for local assets ...
	I0307 18:57:39.339223    1790 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/files for local assets ...
	I0307 18:57:39.339250    1790 start.go:296] duration metric: took 29.009958ms for postStartSetup
	I0307 18:57:39.339596    1790 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/config.json ...
	I0307 18:57:39.339772    1790 start.go:128] duration metric: took 20.159862708s to createHost
	I0307 18:57:39.339795    1790 main.go:141] libmachine: Using SSH client type: native
	I0307 18:57:39.339877    1790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104815a30] 0x104818290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 18:57:39.339881    1790 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 18:57:39.387257    1790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709866659.871702044
	
	I0307 18:57:39.387268    1790 fix.go:216] guest clock: 1709866659.871702044
	I0307 18:57:39.387272    1790 fix.go:229] Guest: 2024-03-07 18:57:39.871702044 -0800 PST Remote: 2024-03-07 18:57:39.339775 -0800 PST m=+20.269304667 (delta=531.927044ms)
	I0307 18:57:39.387282    1790 fix.go:200] guest clock delta is within tolerance: 531.927044ms
	I0307 18:57:39.387285    1790 start.go:83] releasing machines lock for "addons-935000", held for 20.207423166s
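
	[Note: the guest/host clock comparison above runs date +%s.%N in the guest (logged as "date +%!s(MISSING).%!N(MISSING)" because the verbs have no operands) and subtracts the host clock. A small Go sketch reproducing the logged 531.927044ms delta from the values in the log.]

	    // clockdelta_sketch.go - compute the guest clock delta as logged above.
	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // parseGuest parses `date +%s.%N` output ("seconds.nanoseconds").
	    func parseGuest(out string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    	if len(parts) != 2 {
	    		return time.Time{}, fmt.Errorf("unexpected date output %q", out)
	    	}
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, _ := parseGuest("1709866659.871702044") // value from the log
	    	host := time.Date(2024, 3, 7, 18, 57, 39, 339775000,
	    		time.FixedZone("PST", -8*3600)) // host timestamp from the log
	    	fmt.Println("delta:", guest.Sub(host)) // ~531.927044ms, as logged
	    }
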
	I0307 18:57:39.387588    1790 ssh_runner.go:195] Run: cat /version.json
	I0307 18:57:39.387594    1790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:57:39.387598    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:57:39.387608    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:57:39.544312    1790 ssh_runner.go:195] Run: systemctl --version
	I0307 18:57:39.547679    1790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 18:57:39.550399    1790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 18:57:39.550436    1790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:57:39.558736    1790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 18:57:39.558745    1790 start.go:494] detecting cgroup driver to use...
	I0307 18:57:39.558953    1790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:57:39.567193    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:57:39.571753    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:57:39.576026    1790 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:57:39.576056    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:57:39.580074    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:57:39.583865    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:57:39.587276    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:57:39.590753    1790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:57:39.594792    1790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:57:39.598653    1790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:57:39.602198    1790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:57:39.605798    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:39.672913    1790 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:57:39.683952    1790 start.go:494] detecting cgroup driver to use...
	I0307 18:57:39.684033    1790 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 18:57:39.690618    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 18:57:39.695996    1790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 18:57:39.702454    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 18:57:39.707803    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:57:39.712641    1790 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 18:57:39.753533    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:57:39.759289    1790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:57:39.765595    1790 ssh_runner.go:195] Run: which cri-dockerd
	I0307 18:57:39.766987    1790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 18:57:39.770256    1790 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 18:57:39.776192    1790 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 18:57:39.840634    1790 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 18:57:39.905168    1790 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 18:57:39.905229    1790 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 18:57:39.916565    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:40.006534    1790 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:57:41.159545    1790 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153026875s)
	I0307 18:57:41.159618    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 18:57:41.165005    1790 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 18:57:41.171608    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 18:57:41.176787    1790 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 18:57:41.258988    1790 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:57:41.326884    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:41.389177    1790 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 18:57:41.396599    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 18:57:41.401757    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:41.482463    1790 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
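
	[Note: the "configuring docker to use cgroupfs" step above writes a small /etc/docker/daemon.json over scp before cycling cri-docker. A hedged Go sketch of producing such a file; the exec-opts key shown is the conventional way to set the cgroup driver, not necessarily the exact 130-byte payload minikube writes.]

	    // daemonjson_sketch.go - emit a minimal daemon.json selecting cgroupfs.
	    package main

	    import (
	    	"encoding/json"
	    	"log"
	    	"os"
	    )

	    func main() {
	    	cfg := map[string]any{
	    		// assumed key: the usual dockerd option for the cgroup driver
	    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	    	}
	    	data, err := json.MarshalIndent(cfg, "", "  ")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	// illustrative local path; the log copies it to /etc/docker/daemon.json
	    	if err := os.WriteFile("daemon.json", append(data, '\n'), 0o644); err != nil {
	    		log.Fatal(err)
	    	}
	    }
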
	I0307 18:57:41.505270    1790 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 18:57:41.505367    1790 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 18:57:41.507500    1790 start.go:562] Will wait 60s for crictl version
	I0307 18:57:41.507542    1790 ssh_runner.go:195] Run: which crictl
	I0307 18:57:41.509226    1790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:57:41.534535    1790 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 18:57:41.534606    1790 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:57:41.546672    1790 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:57:41.562532    1790 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 18:57:41.562635    1790 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0307 18:57:41.564249    1790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:57:41.568554    1790 kubeadm.go:877] updating cluster {Name:addons-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 18:57:41.568597    1790 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 18:57:41.568638    1790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:57:41.573973    1790 docker.go:685] Got preloaded images: 
	I0307 18:57:41.573982    1790 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 18:57:41.574025    1790 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 18:57:41.577921    1790 ssh_runner.go:195] Run: which lz4
	I0307 18:57:41.579398    1790 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0307 18:57:41.580776    1790 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 18:57:41.580787    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0307 18:57:42.854667    1790 docker.go:649] duration metric: took 1.275329959s to copy over tarball
	I0307 18:57:42.854721    1790 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 18:57:43.928741    1790 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.074036542s)
	I0307 18:57:43.928755    1790 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 18:57:43.944969    1790 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 18:57:43.948798    1790 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 18:57:43.954778    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:44.045883    1790 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:57:46.690622    1790 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.644796084s)
	I0307 18:57:46.690708    1790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:57:46.696634    1790 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 18:57:46.696645    1790 cache_images.go:84] Images are preloaded, skipping loading
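
	[Note: the preload flow above lists docker image tags, sees that registry.k8s.io/kube-apiserver:v1.28.4 "wasn't preloaded", copies the lz4 tarball over, unpacks it under /var, and after a docker restart confirms the images are present. A sketch of that presence check in Go, assuming a local docker CLI; the image name mirrors the log.]

	    // preloadcheck_sketch.go - check whether a required image tag is present.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func hasImage(want string) (bool, error) {
	    	// Same listing command as the "docker images --format" lines above.
	    	out, err := exec.Command("docker", "images",
	    		"--format", "{{.Repository}}:{{.Tag}}").Output()
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	    		if img == want {
	    			return true, nil
	    		}
	    	}
	    	return false, nil
	    }

	    func main() {
	    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	    	fmt.Println(ok, err) // false before the tarball is unpacked, true after
	    }
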
	I0307 18:57:46.696650    1790 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.28.4 docker true true} ...
	I0307 18:57:46.696713    1790 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-935000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 18:57:46.696770    1790 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 18:57:46.704713    1790 cni.go:84] Creating CNI manager for ""
	I0307 18:57:46.704724    1790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 18:57:46.704730    1790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 18:57:46.704739    1790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-935000 NodeName:addons-935000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 18:57:46.704810    1790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-935000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
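
The kubeadm.yaml written above is one file carrying four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking such a multi-document stream with gopkg.in/yaml.v3 (assuming that module is available):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	multi := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n" +
    		"---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
    	dec := yaml.NewDecoder(strings.NewReader(multi))
    	for {
    		var doc struct {
    			Kind string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more documents in the stream
    			}
    			panic(err)
    		}
    		fmt.Println(doc.Kind) // InitConfiguration, then ClusterConfiguration
    	}
    }
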
	I0307 18:57:46.704873    1790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 18:57:46.709009    1790 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:57:46.709040    1790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:57:46.712723    1790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 18:57:46.718499    1790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:57:46.724585    1790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0307 18:57:46.730372    1790 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:57:46.731952    1790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
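
The one-liner above is an idempotent /etc/hosts update: any stale control-plane.minikube.internal line is filtered out with grep -v, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. The same rewrite expressed in Go (a simplified sketch that writes the file directly rather than staging through /tmp):

    package main

    import (
    	"os"
    	"strings"
    )

    // setHostsEntry mirrors the grep -v / echo pipeline above: drop any line
    // ending in "<TAB>host", then append a fresh "ip<TAB>host" mapping.
    func setHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = setHostsEntry("/etc/hosts", "192.168.105.2", "control-plane.minikube.internal")
    }
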
	I0307 18:57:46.735943    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:57:46.801778    1790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:57:46.812046    1790 certs.go:68] Setting up /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000 for IP: 192.168.105.2
	I0307 18:57:46.812055    1790 certs.go:194] generating shared ca certs ...
	I0307 18:57:46.812064    1790 certs.go:226] acquiring lock for ca certs: {Name:mkeed6c4d5ba27d3ef2bc04c52c43819ca546cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:46.812248    1790 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key
	I0307 18:57:46.862109    1790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt ...
	I0307 18:57:46.862118    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt: {Name:mkad7d06a1a8d4d54cfbfa81ef7207f028167a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:46.862358    1790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key ...
	I0307 18:57:46.862364    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key: {Name:mk6cd15575053d4b7ab710d9788b8e895531783b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:46.862482    1790 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key
	I0307 18:57:46.955947    1790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt ...
	I0307 18:57:46.955951    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt: {Name:mkef23d54f6d0c97775b5bd0237900887edfac93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:46.956123    1790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key ...
	I0307 18:57:46.956126    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key: {Name:mk277e1e9e01577996a4fd1799fc0d3bc4bef856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:46.956252    1790 certs.go:256] generating profile certs ...
	I0307 18:57:46.956282    1790 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.key
	I0307 18:57:46.956288    1790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt with IP's: []
	I0307 18:57:47.103673    1790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt ...
	I0307 18:57:47.103681    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: {Name:mk090a111d96f29e72bb148cbd1151677732b655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.103882    1790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.key ...
	I0307 18:57:47.103886    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.key: {Name:mkf40600ce091bfaa28b85e2d4caa992f699ee10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.104013    1790 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key.ffdfac98
	I0307 18:57:47.104023    1790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt.ffdfac98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0307 18:57:47.226844    1790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt.ffdfac98 ...
	I0307 18:57:47.226848    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt.ffdfac98: {Name:mk42de282605631a919242ec9fddbda5156872e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.226984    1790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key.ffdfac98 ...
	I0307 18:57:47.226988    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key.ffdfac98: {Name:mk3fe80d7fc0870f20ce08dc183da65ca0777d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.227094    1790 certs.go:381] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt.ffdfac98 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt
	I0307 18:57:47.227306    1790 certs.go:385] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key.ffdfac98 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key
	I0307 18:57:47.227434    1790 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.key
	I0307 18:57:47.227445    1790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.crt with IP's: []
	I0307 18:57:47.273601    1790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.crt ...
	I0307 18:57:47.273605    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.crt: {Name:mkd5ee9cc6d54c8984a14a08b074e24a1b12dc60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.273755    1790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.key ...
	I0307 18:57:47.273758    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.key: {Name:mkebd5772dafde5884d988e63e901e75c6813142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:47.274017    1790 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 18:57:47.274046    1790 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:57:47.274064    1790 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:57:47.274083    1790 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem (1675 bytes)
	I0307 18:57:47.274446    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:57:47.283590    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 18:57:47.291470    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:57:47.299364    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:57:47.307070    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 18:57:47.314953    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 18:57:47.322798    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:57:47.330904    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 18:57:47.338947    1790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:57:47.347153    1790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:57:47.353917    1790 ssh_runner.go:195] Run: openssl version
	I0307 18:57:47.356329    1790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:57:47.359847    1790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:57:47.361462    1790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:57 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:57:47.361480    1790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:57:47.363709    1790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
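
The link name b5213941.0 follows OpenSSL's hashed-directory convention: the subject-name hash that `openssl x509 -hash -noout` printed in the previous step, plus a .0 collision suffix, which lets TLS clients find minikubeCA.pem in /etc/ssl/certs by hash. The `test -L ... || ln -fs ...` guard keeps the step idempotent; a Go equivalent of that guard (paths taken from the log):

    package main

    import "os"

    // ensureSymlink creates link -> target only when no symlink exists yet,
    // mirroring `test -L <link> || ln -fs <target> <link>`.
    func ensureSymlink(target, link string) error {
    	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
    		return nil // already a symlink, leave it alone
    	}
    	os.Remove(link) // like -f: replace a stale regular file if present
    	return os.Symlink(target, link)
    }

    func main() {
    	_ = ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0")
    }
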
	I0307 18:57:47.367219    1790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 18:57:47.368552    1790 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 18:57:47.368583    1790 kubeadm.go:391] StartCluster: {Name:addons-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:57:47.368643    1790 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 18:57:47.374434    1790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:57:47.378211    1790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:57:47.381681    1790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:57:47.385134    1790 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:57:47.385140    1790 kubeadm.go:156] found existing configuration files:
	
	I0307 18:57:47.385163    1790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 18:57:47.388506    1790 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 18:57:47.388529    1790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 18:57:47.391631    1790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 18:57:47.394665    1790 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 18:57:47.394689    1790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 18:57:47.398053    1790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 18:57:47.401504    1790 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 18:57:47.401529    1790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 18:57:47.405287    1790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 18:57:47.408636    1790 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 18:57:47.408661    1790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 18:57:47.411868    1790 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 18:57:47.434062    1790 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 18:57:47.434100    1790 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 18:57:47.500841    1790 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:57:47.500891    1790 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:57:47.500930    1790 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:57:47.599070    1790 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:57:47.612220    1790 out.go:204]   - Generating certificates and keys ...
	I0307 18:57:47.612258    1790 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 18:57:47.612291    1790 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 18:57:47.764511    1790 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:57:47.874872    1790 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:57:47.941164    1790 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 18:57:48.072544    1790 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 18:57:48.177234    1790 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 18:57:48.177302    1790 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-935000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 18:57:48.243757    1790 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 18:57:48.243816    1790 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-935000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 18:57:48.293814    1790 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:57:48.366621    1790 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:57:48.415459    1790 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 18:57:48.415499    1790 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:57:48.471021    1790 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:57:48.544035    1790 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:57:48.633604    1790 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:57:48.686130    1790 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:57:48.686457    1790 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:57:48.687619    1790 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:57:48.691987    1790 out.go:204]   - Booting up control plane ...
	I0307 18:57:48.692038    1790 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:57:48.692074    1790 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:57:48.692110    1790 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:57:48.700446    1790 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:57:48.700785    1790 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:57:48.700808    1790 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 18:57:48.788315    1790 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:57:52.289348    1790 kubeadm.go:309] [apiclient] All control plane components are healthy after 3.501931 seconds
	I0307 18:57:52.289414    1790 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:57:52.294042    1790 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:57:52.803087    1790 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:57:52.803185    1790 kubeadm.go:309] [mark-control-plane] Marking the node addons-935000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:57:53.307324    1790 kubeadm.go:309] [bootstrap-token] Using token: 9brpxo.tiynv9rx4m8gmuyw
	I0307 18:57:53.316834    1790 out.go:204]   - Configuring RBAC rules ...
	I0307 18:57:53.316888    1790 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:57:53.316945    1790 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:57:53.319184    1790 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:57:53.320212    1790 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:57:53.321311    1790 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:57:53.322774    1790 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:57:53.329684    1790 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:57:53.517345    1790 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 18:57:53.713880    1790 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 18:57:53.714377    1790 kubeadm.go:309] 
	I0307 18:57:53.714409    1790 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 18:57:53.714414    1790 kubeadm.go:309] 
	I0307 18:57:53.714452    1790 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 18:57:53.714458    1790 kubeadm.go:309] 
	I0307 18:57:53.714471    1790 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 18:57:53.714511    1790 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:57:53.714544    1790 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:57:53.714550    1790 kubeadm.go:309] 
	I0307 18:57:53.714584    1790 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 18:57:53.714590    1790 kubeadm.go:309] 
	I0307 18:57:53.714623    1790 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:57:53.714626    1790 kubeadm.go:309] 
	I0307 18:57:53.714651    1790 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 18:57:53.714693    1790 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:57:53.714735    1790 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:57:53.714739    1790 kubeadm.go:309] 
	I0307 18:57:53.714780    1790 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:57:53.714818    1790 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 18:57:53.714821    1790 kubeadm.go:309] 
	I0307 18:57:53.714867    1790 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9brpxo.tiynv9rx4m8gmuyw \
	I0307 18:57:53.714916    1790 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 \
	I0307 18:57:53.714929    1790 kubeadm.go:309] 	--control-plane 
	I0307 18:57:53.714934    1790 kubeadm.go:309] 
	I0307 18:57:53.714976    1790 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:57:53.714979    1790 kubeadm.go:309] 
	I0307 18:57:53.715028    1790 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9brpxo.tiynv9rx4m8gmuyw \
	I0307 18:57:53.715083    1790 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 
	I0307 18:57:53.715141    1790 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
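
The --discovery-token-ca-cert-hash pinned in the join commands is, per kubeadm's documented format, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained Go sketch of that computation over the ca.crt written earlier:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Re-encode the public key as SubjectPublicKeyInfo DER and hash it.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki)) // kubeadm prints sha256:<hex>
    }
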
	I0307 18:57:53.715149    1790 cni.go:84] Creating CNI manager for ""
	I0307 18:57:53.715157    1790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 18:57:53.716640    1790 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 18:57:53.723990    1790 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 18:57:53.727745    1790 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0307 18:57:53.733187    1790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 18:57:53.733234    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:53.733258    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-935000 minikube.k8s.io/updated_at=2024_03_07T18_57_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=addons-935000 minikube.k8s.io/primary=true
	I0307 18:57:53.796370    1790 ops.go:34] apiserver oom_adj: -16
	I0307 18:57:53.796442    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:54.298501    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:54.798488    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:55.298468    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:55.798450    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:56.298462    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:56.798470    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:57.298461    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:57.798410    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:58.298421    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:58.796516    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:59.298364    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:57:59.798417    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:00.298355    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:00.798318    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:01.298300    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:01.798331    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:02.298291    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:02.798255    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:03.298280    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:03.798207    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:04.298248    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:04.797187    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:05.298239    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:05.798204    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:06.298133    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:06.798153    1790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:58:06.833599    1790 kubeadm.go:1106] duration metric: took 13.100769125s to wait for elevateKubeSystemPrivileges
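
The burst of `kubectl get sa default` invocations above is a fixed-interval poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, so the command is retried roughly every 500ms until it succeeds (13.1s here). A generic Go polling helper in that spirit (interval, timeout, and probe are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil runs probe every interval until it returns nil or timeout elapses.
    func pollUntil(interval, timeout time.Duration, probe func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := probe(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
    		return exec.Command("kubectl", "get", "sa", "default").Run()
    	})
    	fmt.Println(err)
    }
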
	W0307 18:58:06.833646    1790 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 18:58:06.833651    1790 kubeadm.go:393] duration metric: took 19.465623208s to StartCluster
	I0307 18:58:06.833661    1790 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:58:06.833810    1790 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:58:06.833998    1790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:58:06.834222    1790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 18:58:06.834241    1790 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:58:06.838722    1790 out.go:177] * Verifying Kubernetes components...
	I0307 18:58:06.834271    1790 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 18:58:06.838748    1790 addons.go:69] Setting yakd=true in profile "addons-935000"
	I0307 18:58:06.838760    1790 addons.go:234] Setting addon yakd=true in "addons-935000"
	I0307 18:58:06.838775    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.838822    1790 addons.go:69] Setting ingress-dns=true in profile "addons-935000"
	I0307 18:58:06.838833    1790 addons.go:234] Setting addon ingress-dns=true in "addons-935000"
	I0307 18:58:06.838847    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.838867    1790 addons.go:69] Setting registry=true in profile "addons-935000"
	I0307 18:58:06.838874    1790 addons.go:234] Setting addon registry=true in "addons-935000"
	I0307 18:58:06.838881    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.838863    1790 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-935000"
	I0307 18:58:06.838892    1790 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-935000"
	I0307 18:58:06.838954    1790 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-935000"
	I0307 18:58:06.838978    1790 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-935000"
	I0307 18:58:06.838996    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839011    1790 addons.go:69] Setting inspektor-gadget=true in profile "addons-935000"
	I0307 18:58:06.838999    1790 addons.go:69] Setting storage-provisioner=true in profile "addons-935000"
	I0307 18:58:06.839020    1790 addons.go:234] Setting addon inspektor-gadget=true in "addons-935000"
	I0307 18:58:06.839032    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839043    1790 addons.go:234] Setting addon storage-provisioner=true in "addons-935000"
	I0307 18:58:06.839111    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839153    1790 addons.go:69] Setting metrics-server=true in profile "addons-935000"
	I0307 18:58:06.839161    1790 addons.go:234] Setting addon metrics-server=true in "addons-935000"
	I0307 18:58:06.834455    1790 config.go:182] Loaded profile config "addons-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 18:58:06.839170    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839204    1790 addons.go:69] Setting cloud-spanner=true in profile "addons-935000"
	I0307 18:58:06.839259    1790 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-935000"
	I0307 18:58:06.839234    1790 addons.go:234] Setting addon cloud-spanner=true in "addons-935000"
	I0307 18:58:06.839268    1790 addons.go:69] Setting gcp-auth=true in profile "addons-935000"
	I0307 18:58:06.839281    1790 mustload.go:65] Loading cluster: addons-935000
	I0307 18:58:06.839307    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839335    1790 retry.go:31] will retry after 521.663447ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839341    1790 config.go:182] Loaded profile config "addons-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 18:58:06.839349    1790 retry.go:31] will retry after 536.036959ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839388    1790 retry.go:31] will retry after 609.456266ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839395    1790 addons.go:69] Setting ingress=true in profile "addons-935000"
	I0307 18:58:06.839400    1790 addons.go:234] Setting addon ingress=true in "addons-935000"
	I0307 18:58:06.839409    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839469    1790 retry.go:31] will retry after 823.345861ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839474    1790 addons.go:69] Setting default-storageclass=true in profile "addons-935000"
	I0307 18:58:06.839482    1790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-935000"
	I0307 18:58:06.839531    1790 retry.go:31] will retry after 711.782355ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839534    1790 addons.go:69] Setting volumesnapshots=true in profile "addons-935000"
	I0307 18:58:06.839534    1790 retry.go:31] will retry after 1.066236747s: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839540    1790 addons.go:234] Setting addon volumesnapshots=true in "addons-935000"
	I0307 18:58:06.839265    1790 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-935000"
	I0307 18:58:06.839590    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.839603    1790 retry.go:31] will retry after 1.45478284s: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839614    1790 retry.go:31] will retry after 744.436802ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839622    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.846658    1790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:58:06.839681    1790 retry.go:31] will retry after 521.303317ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839710    1790 retry.go:31] will retry after 598.560887ms: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.839775    1790 retry.go:31] will retry after 1.037727407s: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:06.850471    1790 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 18:58:06.841163    1790 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-935000"
	I0307 18:58:06.858619    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 18:58:06.854629    1790 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 18:58:06.854670    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:06.862653    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 18:58:06.862665    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 18:58:06.862674    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:06.862650    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 18:58:06.862703    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:06.867652    1790 out.go:177]   - Using image docker.io/busybox:stable
	I0307 18:58:06.871680    1790 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 18:58:06.875688    1790 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 18:58:06.875695    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 18:58:06.875702    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:06.893659    1790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
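
This pipeline dumps the coredns ConfigMap, uses sed to splice a hosts stanza (mapping host.minikube.internal to the host-side gateway 192.168.105.1, with fallthrough so every other name still hits the forwarder) in front of the Corefile's forward plugin, then replaces the ConfigMap. A sketch of the same splice in Go, operating on the Corefile text only (fetching and replacing the ConfigMap is left out):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHosts inserts a hosts block immediately before the forward plugin
    // line of a Corefile, matching the effect of the sed expression above.
    func injectHosts(corefile, ip, name string) string {
    	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	cf := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHosts(cf, "192.168.105.1", "host.minikube.internal"))
    }
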
	I0307 18:58:06.971246    1790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:58:07.052017    1790 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 18:58:07.052029    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 18:58:07.057729    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 18:58:07.064417    1790 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 18:58:07.064428    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 18:58:07.095236    1790 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 18:58:07.095247    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 18:58:07.110021    1790 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 18:58:07.110034    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 18:58:07.132047    1790 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 18:58:07.132057    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 18:58:07.147365    1790 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 18:58:07.147377    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 18:58:07.151949    1790 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 18:58:07.151964    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 18:58:07.158694    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 18:58:07.158706    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 18:58:07.163323    1790 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 18:58:07.163335    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 18:58:07.170341    1790 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:58:07.170353    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 18:58:07.174976    1790 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 18:58:07.174986    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 18:58:07.181582    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:58:07.185676    1790 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 18:58:07.185683    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 18:58:07.201709    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 18:58:07.368446    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 18:58:07.372444    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 18:58:07.376402    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 18:58:07.382383    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 18:58:07.382457    1790 retry.go:31] will retry after 1.307470291s: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/monitor: connect: connection refused
	I0307 18:58:07.394356    1790 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 18:58:07.390451    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 18:58:07.398472    1790 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 18:58:07.406382    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 18:58:07.402460    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 18:58:07.410428    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.413357    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 18:58:07.417405    1790 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 18:58:07.421432    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 18:58:07.421441    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 18:58:07.421451    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.447131    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 18:58:07.450378    1790 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 18:58:07.454442    1790 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 18:58:07.454455    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 18:58:07.454466    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.459369    1790 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 18:58:07.463381    1790 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 18:58:07.463389    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 18:58:07.463398    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.556441    1790 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 18:58:07.559359    1790 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 18:58:07.563422    1790 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 18:58:07.563431    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 18:58:07.563441    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.603991    1790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:58:07.590942    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 18:58:07.605284    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 18:58:07.608427    1790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 18:58:07.618464    1790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:58:07.624533    1790 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 18:58:07.624556    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 18:58:07.624568    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.663484    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:07.749020    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 18:58:07.749033    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 18:58:07.749384    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 18:58:07.787954    1790 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 18:58:07.787963    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 18:58:07.861438    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 18:58:07.868009    1790 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 18:58:07.868021    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 18:58:07.872489    1790 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 18:58:07.872498    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 18:58:07.877699    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 18:58:07.877709    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 18:58:07.889376    1790 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 18:58:07.893427    1790 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 18:58:07.893435    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 18:58:07.893444    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.898249    1790 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:58:07.898261    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 18:58:07.911253    1790 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 18:58:07.914399    1790 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 18:58:07.914406    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 18:58:07.914417    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:07.937761    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 18:58:07.937774    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 18:58:07.942327    1790 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 18:58:07.942337    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 18:58:07.961477    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:58:07.998634    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 18:58:08.006859    1790 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 18:58:08.006874    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 18:58:08.041994    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 18:58:08.064433    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 18:58:08.064444    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 18:58:08.083143    1790 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.189495625s)
	I0307 18:58:08.083159    1790 start.go:948] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
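The pipeline completed at 18:58:08.083143 rewrites the coredns ConfigMap in place: it fetches the Corefile with kubectl, splices a hosts stanza in front of the "forward . /etc/resolv.conf" block with sed (plus a "log" directive before "errors"), and replaces the ConfigMap. Reconstructed from the sed expression in the log, the injected stanza is:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }

This is the CoreDNS hosts plugin: in-cluster lookups of host.minikube.internal resolve to the host-side gateway (192.168.105.1), and fallthrough lets every other query continue to the normal forward block.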
	I0307 18:58:08.083148    1790 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.111920958s)
	I0307 18:58:08.083622    1790 node_ready.go:35] waiting up to 6m0s for node "addons-935000" to be "Ready" ...
	I0307 18:58:08.086193    1790 node_ready.go:49] node "addons-935000" has status "Ready":"True"
	I0307 18:58:08.086212    1790 node_ready.go:38] duration metric: took 2.570292ms for node "addons-935000" to be "Ready" ...
	I0307 18:58:08.086227    1790 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:58:08.092272    1790 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace to be "Ready" ...
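pod_ready.go re-fetches each system-critical pod until its Ready condition reports True; the periodic `has status "Ready":"False"` lines further down are those polls for the coredns pod. A minimal client-go sketch of the same check, assuming a configured clientset (waitPodReady and its parameters are illustrative, not minikube's actual helper):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a named pod until its Ready condition is True,
    // mirroring the pod_ready.go wait in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat fetch errors as transient; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }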
	I0307 18:58:08.145246    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 18:58:08.145259    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 18:58:08.147975    1790 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 18:58:08.147981    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 18:58:08.196682    1790 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 18:58:08.196692    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 18:58:08.220357    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 18:58:08.220366    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 18:58:08.288510    1790 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 18:58:08.288521    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 18:58:08.296689    1790 addons.go:234] Setting addon default-storageclass=true in "addons-935000"
	I0307 18:58:08.296708    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:08.297429    1790 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 18:58:08.297437    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 18:58:08.297444    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:08.300845    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 18:58:08.300853    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 18:58:08.397870    1790 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 18:58:08.397884    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 18:58:08.440503    1790 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 18:58:08.440516    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 18:58:08.499061    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 18:58:08.553648    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 18:58:08.618322    1790 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-935000" context rescaled to 1 replicas
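kapi.go:248 scales the coredns deployment down to a single replica, presumably to cut resource use on this single-node cluster. A sketch of that rescale using client-go's Deployment scale subresource (the clientset is assumed; this shows the API shape, not minikube's own code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS sketches the "rescaled to 1 replicas" step: read the
    // current Scale object, set the desired replica count, write it back.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }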
	I0307 18:58:08.634510    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:58:08.704765    1790 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:58:08.708821    1790 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:58:08.708832    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 18:58:08.708843    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:08.967715    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.910017917s)
	I0307 18:58:08.967756    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.786212791s)
	W0307 18:58:08.967777    1790 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 18:58:08.967790    1790 retry.go:31] will retry after 125.10399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
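The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define it, and the API server's discovery data has not yet registered the new kind when the class is submitted, hence "no matches for kind ... ensure CRDs are installed first". retry.go backs off 125ms and reapplies with --force (18:58:09.093081 below), which completes successfully at 18:58:10.952214. A minimal sketch of that backoff-and-reapply shape, where apply is a hypothetical stand-in for shelling out to kubectl:

    package main

    import (
        "fmt"
        "time"
    )

    // retryApply sketches the retry behavior shown above: attempt the
    // apply, wait, and try again with a growing backoff between attempts.
    func retryApply(apply func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            if i < attempts-1 {
                time.Sleep(backoff)
                backoff *= 2 // widen the wait before the next attempt
            }
        }
        return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
    }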
	I0307 18:58:09.093081    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 18:58:09.101084    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:58:09.375628    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.92853325s)
	I0307 18:58:09.375636    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.62629125s)
	I0307 18:58:09.386189    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.184519959s)
	I0307 18:58:10.097452    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:10.596576    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.735197834s)
	I0307 18:58:10.596607    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.635192334s)
	I0307 18:58:10.596615    1790 addons.go:470] Verifying addon metrics-server=true in "addons-935000"
	I0307 18:58:10.596608    1790 addons.go:470] Verifying addon ingress=true in "addons-935000"
	I0307 18:58:10.600645    1790 out.go:177] * Verifying ingress addon...
	I0307 18:58:10.596669    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.59809675s)
	I0307 18:58:10.596677    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.554748083s)
	I0307 18:58:10.596706    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.097694166s)
	I0307 18:58:10.607649    1790 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-935000 service yakd-dashboard -n yakd-dashboard
	
	I0307 18:58:10.600661    1790 addons.go:470] Verifying addon registry=true in "addons-935000"
	I0307 18:58:10.621642    1790 out.go:177] * Verifying registry addon...
	I0307 18:58:10.617781    1790 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 18:58:10.626938    1790 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 18:58:10.627590    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:10.627978    1790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 18:58:10.631750    1790 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 18:58:10.631759    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
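The kapi.go:96 lines that dominate the rest of this trace are label-selector waits: each tick lists the pods matching an addon's selector (kapi.go:86 reports how many were found) and logs the aggregate state, which stays Pending here until the addon images finish pulling. Complementing the named-pod sketch earlier, a selector-based wait might look like this (same assumed clientset; names illustrative):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitSelector lists pods matching a label selector and succeeds once
    // every matched pod reports phase Running, mirroring the kapi.go waits.
    func waitSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // nothing scheduled yet; keep polling
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil
                }
            }
            return true, nil
        })
    }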
	I0307 18:58:10.952116    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.317657375s)
	I0307 18:58:10.952116    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.398515042s)
	I0307 18:58:10.952185    1790 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-935000"
	I0307 18:58:10.952214    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.85451925s)
	I0307 18:58:10.960577    1790 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 18:58:10.952234    1790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.851191625s)
	I0307 18:58:10.967142    1790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 18:58:10.978139    1790 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 18:58:10.978151    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:11.126191    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:11.130525    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:11.469733    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:11.625337    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:11.630479    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:11.971104    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:12.128352    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:12.131029    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:12.471924    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:12.596232    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:12.625372    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:12.630232    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:12.971652    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:13.125470    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:13.130299    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:13.470544    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:13.625821    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:13.630897    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:13.971673    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:14.125265    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:14.130387    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:14.268951    1790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 18:58:14.268968    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:14.298839    1790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 18:58:14.304832    1790 addons.go:234] Setting addon gcp-auth=true in "addons-935000"
	I0307 18:58:14.304854    1790 host.go:66] Checking if "addons-935000" exists ...
	I0307 18:58:14.305755    1790 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 18:58:14.305762    1790 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/addons-935000/id_rsa Username:docker}
	I0307 18:58:14.336522    1790 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 18:58:14.345388    1790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 18:58:14.349375    1790 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 18:58:14.349380    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 18:58:14.355125    1790 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 18:58:14.355131    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 18:58:14.360594    1790 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 18:58:14.360601    1790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 18:58:14.367885    1790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 18:58:14.472111    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:14.588646    1790 addons.go:470] Verifying addon gcp-auth=true in "addons-935000"
	I0307 18:58:14.595082    1790 out.go:177] * Verifying gcp-auth addon...
	I0307 18:58:14.604609    1790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 18:58:14.610502    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:14.610630    1790 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 18:58:14.610636    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:14.625548    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:14.630253    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:14.972372    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:15.107891    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:15.125735    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:15.130210    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:15.471784    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:15.607489    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:15.624795    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:15.630451    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:15.972522    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:16.107427    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:16.127114    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:16.129892    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:16.471961    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:16.607944    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:16.625120    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:16.630324    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:16.971850    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:17.096951    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:17.107580    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:17.125596    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:17.130278    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:17.471685    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:17.607646    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:17.625296    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:17.630100    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:17.971950    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:18.107770    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:18.127642    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:18.129705    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:18.471632    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:18.607785    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:18.625153    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:18.630264    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:18.973852    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:19.108025    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:19.125196    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:19.130219    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:19.472084    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:19.597063    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:19.607182    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:19.625129    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:19.630339    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:19.971886    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:20.107712    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:20.125614    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:20.130234    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:20.471708    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:20.607607    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:20.625193    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:20.629935    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:20.970278    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:21.107832    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:21.125344    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:21.130091    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:21.469965    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:21.607609    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:21.625296    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:21.630059    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:21.971715    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:22.096858    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:22.108577    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:22.125662    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:22.129920    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:22.470783    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:22.606000    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:22.625079    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:22.630282    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:22.971942    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:23.108115    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:23.124561    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:23.130318    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:23.471791    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:23.608011    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:23.625170    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:23.630092    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:23.972278    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:24.107786    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:24.125077    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:24.130083    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:24.472027    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:24.596786    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:24.607591    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:24.625061    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:24.630296    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:24.971793    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:25.107572    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:25.125096    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:25.129891    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:25.471788    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:25.607528    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:25.624593    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:25.630672    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:25.971788    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:26.107630    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:26.125372    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:26.130447    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:26.473115    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:26.597268    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:26.608048    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:26.626653    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:26.629553    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:26.971879    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:27.108301    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:27.125098    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:27.129780    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:27.471495    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:27.607186    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:27.625176    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:27.630020    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:27.971609    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:28.107011    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:28.125168    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:28.129735    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:28.473065    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:28.607559    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:28.625028    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:28.629880    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:28.971525    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:29.094210    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:29.107551    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:29.124935    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:29.129699    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:29.471741    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:29.607410    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:29.624914    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:29.629726    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:29.971558    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:30.107730    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:30.124995    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:30.129930    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:30.471497    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:30.607676    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:30.625335    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:30.630028    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:30.971297    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:31.096396    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:31.107367    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:31.124999    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:31.129798    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:31.471365    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:31.607695    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:31.624858    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:31.629963    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:31.971759    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:32.107349    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:32.124851    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:32.129767    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:32.471272    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:32.607388    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:32.624813    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:32.629722    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:32.969409    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:33.096568    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:33.107408    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:33.125277    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:33.129803    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:33.469693    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:33.607293    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:33.625158    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:33.629618    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:33.969570    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:34.107252    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:34.125003    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:34.129684    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:34.471268    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:34.606991    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:34.624935    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:34.629722    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:34.971268    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:35.107436    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:35.125365    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:35.129648    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:35.471081    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:35.596665    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:35.607080    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:35.624710    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:35.629937    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:35.971212    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:36.107408    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:36.124755    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:36.129603    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:36.471546    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:36.605827    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:36.624374    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:36.629810    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:36.971205    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:37.107526    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:37.124891    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:37.151390    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:37.477395    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:37.607178    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:37.624715    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:37.629688    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:37.971131    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:38.096283    1790 pod_ready.go:102] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"False"
	I0307 18:58:38.107290    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:38.125017    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:38.130741    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:38.470294    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:38.607349    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:38.625625    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:38.629950    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:38.975124    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:39.107216    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:39.125480    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:39.129579    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:39.471322    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the same four kapi.go:96 polls (gcp-auth, ingress-nginx, registry, csi-hostpath-driver) repeat every ~500ms through 18:58:47, all still Pending, with pod_ready.go:102 reporting pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace as "Ready":"False" at 18:58:40, 18:58:42, 18:58:44, and 18:58:47 ...]
	I0307 18:58:47.595969    1790 pod_ready.go:92] pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:47.595977    1790 pod_ready.go:81] duration metric: took 39.504816666s for pod "coredns-5dd5756b68-jdvxt" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.595982    1790 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ndqc4" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.596962    1790 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ndqc4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ndqc4" not found
	I0307 18:58:47.596968    1790 pod_ready.go:81] duration metric: took 982.583µs for pod "coredns-5dd5756b68-ndqc4" in "kube-system" namespace to be "Ready" ...
	E0307 18:58:47.596971    1790 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ndqc4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ndqc4" not found
	I0307 18:58:47.596975    1790 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.599058    1790 pod_ready.go:92] pod "etcd-addons-935000" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:47.599064    1790 pod_ready.go:81] duration metric: took 2.085667ms for pod "etcd-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.599068    1790 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.601123    1790 pod_ready.go:92] pod "kube-apiserver-addons-935000" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:47.601127    1790 pod_ready.go:81] duration metric: took 2.056709ms for pod "kube-apiserver-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.601130    1790 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.609985    1790 pod_ready.go:92] pod "kube-controller-manager-addons-935000" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:47.609995    1790 pod_ready.go:81] duration metric: took 8.862209ms for pod "kube-controller-manager-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.610000    1790 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prt8x" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.610289    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:47.624173    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:47.629304    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:47.796879    1790 pod_ready.go:92] pod "kube-proxy-prt8x" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:47.796889    1790 pod_ready.go:81] duration metric: took 186.890375ms for pod "kube-proxy-prt8x" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.796894    1790 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:47.970883    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:48.107502    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:48.124593    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:48.129417    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:48.197092    1790 pod_ready.go:92] pod "kube-scheduler-addons-935000" in "kube-system" namespace has status "Ready":"True"
	I0307 18:58:48.197101    1790 pod_ready.go:81] duration metric: took 400.215417ms for pod "kube-scheduler-addons-935000" in "kube-system" namespace to be "Ready" ...
	I0307 18:58:48.197105    1790 pod_ready.go:38] duration metric: took 40.112013667s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
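
The pod_ready wait above polls each pod until its Ready condition turns True, and treats a pod that disappears mid-wait as done rather than failed (the "(skipping!)" branch at 18:58:47.596). A minimal client-go sketch of that pattern follows; the clientset construction and the 500ms poll interval are assumptions for illustration:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports the Ready condition,
    // skipping pods that no longer exist (as the log does at 18:58:47.596).
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod was deleted; treat as done, not as an error
            }
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil // no Ready condition yet; keep polling
        })
    }
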
	I0307 18:58:48.197114    1790 api_server.go:52] waiting for apiserver process to appear ...
	I0307 18:58:48.197172    1790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:58:48.203998    1790 api_server.go:72] duration metric: took 41.370921667s to wait for apiserver process to appear ...
	I0307 18:58:48.204008    1790 api_server.go:88] waiting for apiserver healthz status ...
	I0307 18:58:48.204016    1790 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0307 18:58:48.206935    1790 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0307 18:58:48.207531    1790 api_server.go:141] control plane version: v1.28.4
	I0307 18:58:48.207537    1790 api_server.go:131] duration metric: took 3.5265ms to wait for apiserver health ...
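
The healthz probe at 18:58:48.204 is a plain HTTPS GET retried until the endpoint returns 200 with body "ok". A minimal Go sketch of the same pattern; the insecure TLS client is an assumption here, standing in for however the harness trusts the test cluster's self-signed CA:

    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy reports whether the healthz endpoint answers 200 "ok".
    func apiserverHealthy(url string) bool {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    // Usage, polling as the harness does:
    //   for !apiserverHealthy("https://192.168.105.2:8443/healthz") {
    //       time.Sleep(500 * time.Millisecond)
    //   }
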
	I0307 18:58:48.207540    1790 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 18:58:48.400397    1790 system_pods.go:59] 17 kube-system pods found
	I0307 18:58:48.400411    1790 system_pods.go:61] "coredns-5dd5756b68-jdvxt" [87ec74f9-2a1f-4c21-ae77-40c0361fdf65] Running
	I0307 18:58:48.400414    1790 system_pods.go:61] "csi-hostpath-attacher-0" [685d049d-9f47-4cad-999d-967644aaa0a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 18:58:48.400418    1790 system_pods.go:61] "csi-hostpath-resizer-0" [bc1792cf-a3a3-4d91-be0e-cac0cac4d0b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 18:58:48.400421    1790 system_pods.go:61] "csi-hostpathplugin-57vqg" [3e050b98-573c-4b76-b0f0-2afd4065e133] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 18:58:48.400423    1790 system_pods.go:61] "etcd-addons-935000" [1257b222-899c-4831-bad8-40adf066fede] Running
	I0307 18:58:48.400425    1790 system_pods.go:61] "kube-apiserver-addons-935000" [176b2e50-a35e-4f5b-aada-bc9db9705ba7] Running
	I0307 18:58:48.400427    1790 system_pods.go:61] "kube-controller-manager-addons-935000" [1064a953-6e80-440e-ada4-da7b7c73079d] Running
	I0307 18:58:48.400429    1790 system_pods.go:61] "kube-ingress-dns-minikube" [98a50817-d269-477c-a4bf-9a3d91183ab8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 18:58:48.400431    1790 system_pods.go:61] "kube-proxy-prt8x" [936fa942-35ea-4bcb-ad00-2082a2878ecb] Running
	I0307 18:58:48.400432    1790 system_pods.go:61] "kube-scheduler-addons-935000" [c02b987f-1753-4a08-997e-122bb96e55e6] Running
	I0307 18:58:48.400435    1790 system_pods.go:61] "metrics-server-69cf46c98-758pv" [2b344e4d-ec86-4cc1-8e72-496ea70077fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 18:58:48.400438    1790 system_pods.go:61] "nvidia-device-plugin-daemonset-kvqmd" [80c34cce-9d7a-45ca-a749-4d2c17971304] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 18:58:48.400440    1790 system_pods.go:61] "registry-c6628" [991b6c5f-be42-4c1c-8513-cced7da00d13] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 18:58:48.400443    1790 system_pods.go:61] "registry-proxy-4g8rw" [a32e6832-a04d-4fcc-91b1-07b60c6e543c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 18:58:48.400445    1790 system_pods.go:61] "snapshot-controller-58dbcc7b99-kdgrl" [da4a5839-3e45-4846-a72d-974eab0cfb1f] Running
	I0307 18:58:48.400447    1790 system_pods.go:61] "snapshot-controller-58dbcc7b99-nsnvx" [991b0d17-28c6-4da8-b503-b6f7835576b2] Running
	I0307 18:58:48.400448    1790 system_pods.go:61] "storage-provisioner" [75b6aae7-29d1-4f6b-a408-cbdb951d00c9] Running
	I0307 18:58:48.400451    1790 system_pods.go:74] duration metric: took 192.914416ms to wait for pod list to return data ...
	I0307 18:58:48.400456    1790 default_sa.go:34] waiting for default service account to be created ...
	I0307 18:58:48.470724    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:48.596475    1790 default_sa.go:45] found service account: "default"
	I0307 18:58:48.596484    1790 default_sa.go:55] duration metric: took 196.030709ms for default service account to be created ...
	I0307 18:58:48.596488    1790 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 18:58:48.607560    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:58:48.624973    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:58:48.629226    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 18:58:48.799971    1790 system_pods.go:86] 17 kube-system pods found
	[... system_pods.go:89 lists the same 17 pods with the same states as the system_pods.go:61 listing at 18:58:48.400 ...]
	I0307 18:58:48.800031    1790 system_pods.go:126] duration metric: took 203.545667ms to wait for k8s-apps to be running ...
	I0307 18:58:48.800035    1790 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:58:48.800084    1790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:58:48.806882    1790 system_svc.go:56] duration metric: took 6.844416ms WaitForService to wait for kubelet
	I0307 18:58:48.806892    1790 kubeadm.go:576] duration metric: took 41.973833959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
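
The kubelet check above shells out to systemd through the SSH runner; with --quiet, systemctl reports the answer purely via its exit code. A local os/exec sketch of the equivalent check (running it directly rather than over SSH is an assumption):

    package sketch

    import "os/exec"

    // kubeletActive reports whether systemd considers the kubelet unit active:
    // "systemctl is-active --quiet" exits 0 iff the unit is active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
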
	I0307 18:58:48.806903    1790 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:58:48.970656    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:58:48.996728    1790 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 18:58:48.996741    1790 node_conditions.go:123] node cpu capacity is 2
	I0307 18:58:48.996746    1790 node_conditions.go:105] duration metric: took 189.845708ms to run NodePressure ...
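
The NodePressure step reads each node's reported capacity, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from. A client-go sketch of reading those values, with the clientset construction assumed:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node's ephemeral-storage and CPU capacity.
    func printNodeCapacity(cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
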
	I0307 18:58:48.996753    1790 start.go:240] waiting for startup goroutines ...
	[... kapi.go:96 polls for gcp-auth, ingress-nginx, registry, and csi-hostpath-driver continue every ~500ms from 18:58:49 through 18:59:00, all still Pending ...]
	I0307 18:59:00.629438    1790 kapi.go:107] duration metric: took 50.002879834s to wait for kubernetes.io/minikube-addons=registry ...
	[... kapi.go:96 polls for gcp-auth, ingress-nginx, and csi-hostpath-driver continue every ~500ms from 18:59:00 through 18:59:33, all still Pending ...]
	I0307 18:59:33.469866    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:33.605915    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:33.623263    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:33.969591    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:34.106451    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:34.123091    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:34.469769    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:34.605973    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:34.623173    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:34.969572    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:35.106012    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:35.122743    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:35.469573    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:35.605817    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:35.623071    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:35.969372    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:36.106091    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:36.122934    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:36.468364    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:36.606070    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:36.623053    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:36.969403    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:37.106143    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:37.123197    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:37.469794    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:37.605905    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:37.623596    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:37.969814    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:38.106095    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:38.123086    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:38.469961    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:38.605604    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:38.622951    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:38.969476    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:39.105836    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:39.122962    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:39.467561    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:39.606123    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:39.622919    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:39.968717    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:40.105672    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:40.122966    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:40.469088    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:40.606130    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:40.623518    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:40.969371    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:41.105790    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:41.122967    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:41.469318    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:41.605883    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:41.622841    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:41.969507    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:42.105971    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:42.122948    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:42.469408    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:42.605563    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:42.622954    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:42.969033    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:43.106502    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:43.122952    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:43.469449    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:43.605600    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:43.622763    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:43.969463    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:44.105512    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:44.123591    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:44.469330    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:44.605551    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:44.622782    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:44.969090    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:45.105742    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:45.122829    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:45.469393    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:45.605682    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:45.622895    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:45.969231    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:46.105525    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:46.122749    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:46.469366    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:46.605763    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:46.622713    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:46.969075    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:47.105854    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:47.123047    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:47.469566    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 18:59:47.605830    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:47.622835    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:47.969281    1790 kapi.go:107] duration metric: took 1m37.004898375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 18:59:48.105535    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:48.122737    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:48.605801    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:48.622685    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:49.105950    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:49.122748    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:49.605688    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:49.622620    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:50.106067    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:50.122728    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:50.605629    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:50.622665    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:51.105539    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:51.122598    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:51.605685    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:51.622431    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:52.105617    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:52.122575    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:52.605614    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:52.622519    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:53.105926    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:53.122617    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:53.605358    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:53.622522    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:54.104011    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:54.123198    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:54.605458    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:54.622602    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:55.105563    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:55.122482    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:55.605538    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:55.622436    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:56.105603    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:56.122518    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:56.605442    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:56.622261    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:57.103642    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:57.122556    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:57.605520    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:57.622512    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:58.105769    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:58.122191    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:58.605432    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:58.622458    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:59.105494    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:59.122507    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 18:59:59.605217    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 18:59:59.622397    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:00.105416    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:00.122301    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:00.605505    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:00.622485    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:01.105295    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:01.122391    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:01.605413    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:01.622449    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:02.105234    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:02.122490    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:02.605250    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:02.622463    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:03.105453    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:03.122412    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:03.605173    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:03.622200    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:04.105319    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:04.122379    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:04.605235    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:04.622582    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:05.105869    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:05.122568    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:05.605262    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:05.622341    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:06.105159    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:06.122181    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:06.605163    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:06.622295    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:07.105033    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:07.122349    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:07.604936    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:07.625991    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:08.105349    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:08.122109    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:08.605127    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:08.622170    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:09.105185    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:09.122186    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:09.604932    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:09.622083    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:10.105338    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:10.122165    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:10.604940    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:10.622057    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:11.105241    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:11.121871    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:11.604900    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:11.621962    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:12.104955    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:12.122063    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:12.604759    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:12.622020    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:13.105334    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:13.122028    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:13.604962    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:13.622000    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:14.105168    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:14.122149    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:14.604851    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:14.621724    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:15.105098    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:15.122088    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:15.604943    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:15.621912    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:16.103895    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:16.121928    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:16.604955    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:16.622022    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:17.104815    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:17.121873    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:17.606135    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:17.621820    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:18.110004    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:18.128002    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:18.604640    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:18.621797    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:19.104781    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:19.121809    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:19.604742    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:19.621219    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:20.104866    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:20.121969    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:20.604423    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:20.621808    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:21.105051    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:21.121481    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:21.604884    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:21.621812    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:22.104885    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:22.121781    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:22.604931    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:22.621862    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:23.104780    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:23.121804    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:23.604625    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:23.621722    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:24.105199    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:24.122242    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:24.604790    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:24.622258    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:25.105197    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:25.122321    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:25.604708    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:25.622009    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:26.105227    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:26.124748    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:26.604505    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:26.621677    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:27.104510    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:27.122184    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:27.604473    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:27.621552    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:28.104947    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:28.122009    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:28.604667    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:28.621878    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:29.104746    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:29.121848    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:29.606143    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:29.622274    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:30.104440    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:30.121779    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:30.604422    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:30.621342    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:31.104688    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:31.121681    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:31.604141    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:31.621881    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:32.104704    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:32.121528    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:32.604527    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:32.623061    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:33.104256    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:33.121423    1790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 19:00:33.604326    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:33.621391    1790 kapi.go:107] duration metric: took 2m23.007675375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 19:00:34.103851    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:34.604278    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:35.104681    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:35.604452    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:36.104576    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:36.604156    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:37.104437    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:37.604328    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:38.104494    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:38.604098    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:39.104484    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:39.603326    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:40.104348    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:40.604377    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:41.104508    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:41.604177    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:42.104560    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:42.603329    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:43.108446    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:43.604152    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:44.103699    1790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 19:00:44.611666    1790 kapi.go:107] duration metric: took 2m30.011322084s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 19:00:44.614551    1790 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-935000 cluster.
	I0307 19:00:44.623483    1790 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 19:00:44.626465    1790 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0307 19:00:44.630541    1790 out.go:177] * Enabled addons: storage-provisioner-rancher, ingress-dns, cloud-spanner, inspektor-gadget, metrics-server, nvidia-device-plugin, yakd, volumesnapshots, storage-provisioner, default-storageclass, registry, csi-hostpath-driver, ingress, gcp-auth
	I0307 19:00:44.634482    1790 addons.go:505] duration metric: took 2m37.804702584s for enable addons: enabled=[storage-provisioner-rancher ingress-dns cloud-spanner inspektor-gadget metrics-server nvidia-device-plugin yakd volumesnapshots storage-provisioner default-storageclass registry csi-hostpath-driver ingress gcp-auth]
	I0307 19:00:44.634511    1790 start.go:245] waiting for cluster config update ...
	I0307 19:00:44.634521    1790 start.go:254] writing updated cluster config ...
	I0307 19:00:44.638566    1790 ssh_runner.go:195] Run: rm -f paused
	I0307 19:00:44.773851    1790 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0307 19:00:44.778519    1790 out.go:177] * Done! kubectl is now configured to use "addons-935000" cluster and "default" namespace by default
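	
	The repeated kapi.go:96 lines above come from a poll loop that lists pods by label selector and logs their phase until they leave Pending; the kapi.go:107 lines report how long each wait took. Below is a minimal client-go sketch of that kind of wait — not minikube's actual kapi implementation — assuming a kubeconfig at the default path; the namespace, poll interval, and timeout are illustrative, while the label selector is taken from the log.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods polls pods matching selector in ns until at least one exists
	// and all of them are Running, mirroring the "waiting for pod ..., current
	// state: Pending" lines in the log above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != v1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // the log above polls roughly twice per second
			}
		}
	}
	
	func main() {
		// Assumes a kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Namespace and timeout are assumptions for illustration; the selector
		// matches the gcp-auth wait recorded in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitForPods(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
			panic(err)
		}
	}
	
	As the out.go messages above note, a pod can opt out of the gcp-auth credential mount by carrying a label with the `gcp-auth-skip-secret` key in its metadata; the exact value the webhook expects is not shown in this log.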
	
	
	==> Docker <==
	Mar 08 03:02:26 addons-935000 dockerd[1112]: time="2024-03-08T03:02:26.438505505Z" level=info msg="shim disconnected" id=82113a7c044e475055672a99fffe07fd15176e7bf523af7e2beaf38823d5b523 namespace=moby
	Mar 08 03:02:26 addons-935000 dockerd[1112]: time="2024-03-08T03:02:26.438644337Z" level=warning msg="cleaning up after shim disconnected" id=82113a7c044e475055672a99fffe07fd15176e7bf523af7e2beaf38823d5b523 namespace=moby
	Mar 08 03:02:26 addons-935000 dockerd[1112]: time="2024-03-08T03:02:26.438654129Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:02:26 addons-935000 dockerd[1106]: time="2024-03-08T03:02:26.438733753Z" level=info msg="ignoring event" container=82113a7c044e475055672a99fffe07fd15176e7bf523af7e2beaf38823d5b523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.184246051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.184284634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.184292676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.184326717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:02:27 addons-935000 dockerd[1106]: time="2024-03-08T03:02:27.206437890Z" level=info msg="ignoring event" container=fd908e104cd829306c0b4eaec1ac326b7bcd718deaf1b1bef8919e48da880595 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.206574597Z" level=info msg="shim disconnected" id=fd908e104cd829306c0b4eaec1ac326b7bcd718deaf1b1bef8919e48da880595 namespace=moby
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.206612055Z" level=warning msg="cleaning up after shim disconnected" id=fd908e104cd829306c0b4eaec1ac326b7bcd718deaf1b1bef8919e48da880595 namespace=moby
	Mar 08 03:02:27 addons-935000 dockerd[1112]: time="2024-03-08T03:02:27.206617055Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:02:29 addons-935000 dockerd[1112]: time="2024-03-08T03:02:29.920715579Z" level=info msg="shim disconnected" id=b4b4e1951bce1c56ca21a1c1e7420eaaed032654ffdfed7c0ac63adf52522cff namespace=moby
	Mar 08 03:02:29 addons-935000 dockerd[1112]: time="2024-03-08T03:02:29.920746162Z" level=warning msg="cleaning up after shim disconnected" id=b4b4e1951bce1c56ca21a1c1e7420eaaed032654ffdfed7c0ac63adf52522cff namespace=moby
	Mar 08 03:02:29 addons-935000 dockerd[1112]: time="2024-03-08T03:02:29.920750287Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:02:29 addons-935000 dockerd[1106]: time="2024-03-08T03:02:29.920688121Z" level=info msg="ignoring event" container=b4b4e1951bce1c56ca21a1c1e7420eaaed032654ffdfed7c0ac63adf52522cff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:02:33 addons-935000 dockerd[1106]: time="2024-03-08T03:02:33.388264917Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4
	Mar 08 03:02:33 addons-935000 dockerd[1106]: time="2024-03-08T03:02:33.429163923Z" level=info msg="ignoring event" container=43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.429157089Z" level=info msg="shim disconnected" id=43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4 namespace=moby
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.429356463Z" level=warning msg="cleaning up after shim disconnected" id=43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4 namespace=moby
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.429368254Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:02:33 addons-935000 dockerd[1106]: time="2024-03-08T03:02:33.518237700Z" level=info msg="ignoring event" container=356f4d3eb6b948dd6acff11cd9ba8e79f1df1034a75faaff7a1f2ebef07dcb85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.518624114Z" level=info msg="shim disconnected" id=356f4d3eb6b948dd6acff11cd9ba8e79f1df1034a75faaff7a1f2ebef07dcb85 namespace=moby
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.518657322Z" level=warning msg="cleaning up after shim disconnected" id=356f4d3eb6b948dd6acff11cd9ba8e79f1df1034a75faaff7a1f2ebef07dcb85 namespace=moby
	Mar 08 03:02:33 addons-935000 dockerd[1112]: time="2024-03-08T03:02:33.518661780Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fd908e104cd82       dd1b12fcb6097                                                                                                                10 seconds ago       Exited              hello-world-app           1                   4c394f6c8e764       hello-world-app-5d77478584-ggdtb
	e403ab131c81c       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                30 seconds ago       Running             nginx                     0                   ca67dc6b720e7       nginx
	edb69fc2e6453       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        40 seconds ago       Running             headlamp                  0                   7e2a1b38b4896       headlamp-7ddfbb94ff-p7n25
	7e020ca53e2c3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 About a minute ago   Running             gcp-auth                  0                   e16b64458cb7b       gcp-auth-5f6b4f85fd-c956m
	6c1a4c9a076df       1a024e390dd05                                                                                                                3 minutes ago        Exited              patch                     1                   31310bbd2f6cd       ingress-nginx-admission-patch-p4927
	0f7a7f59d540d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   3 minutes ago        Exited              create                    0                   87a524cc34fdd       ingress-nginx-admission-create-fxwtv
	3dc5801393cda       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        3 minutes ago        Running             yakd                      0                   4355ffd3dd427       yakd-dashboard-9947fc6bf-przgj
	8f3447448a931       ba04bb24b9575                                                                                                                4 minutes ago        Running             storage-provisioner       0                   0e3a07f1717ec       storage-provisioner
	7929c3f4a1b8b       97e04611ad434                                                                                                                4 minutes ago        Running             coredns                   0                   06c411b7cf5c9       coredns-5dd5756b68-jdvxt
	2a89b3954b3a2       3ca3ca488cf13                                                                                                                4 minutes ago        Running             kube-proxy                0                   6502ecde9d44c       kube-proxy-prt8x
	32d6186d196de       05c284c929889                                                                                                                4 minutes ago        Running             kube-scheduler            0                   bd648dce26339       kube-scheduler-addons-935000
	0b67fc96690d9       9cdd6470f48c8                                                                                                                4 minutes ago        Running             etcd                      0                   df2948a753423       etcd-addons-935000
	113af40383b33       9961cbceaf234                                                                                                                4 minutes ago        Running             kube-controller-manager   0                   cffcadbad554d       kube-controller-manager-addons-935000
	ee1a7ad36679c       04b4c447bb9d4                                                                                                                4 minutes ago        Running             kube-apiserver            0                   60cd5dde20e34       kube-apiserver-addons-935000
	
	
	==> coredns [7929c3f4a1b8] <==
	[INFO] 10.244.0.20:34275 - 32469 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072625s
	[INFO] 10.244.0.20:34275 - 31405 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003825s
	[INFO] 10.244.0.20:34275 - 41991 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012604s
	[INFO] 10.244.0.20:34275 - 58708 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090666s
	[INFO] 10.244.0.20:47180 - 61732 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064916s
	[INFO] 10.244.0.20:45385 - 30254 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014334s
	[INFO] 10.244.0.20:47180 - 49669 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016s
	[INFO] 10.244.0.20:45385 - 5324 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011249s
	[INFO] 10.244.0.20:47180 - 2485 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024125s
	[INFO] 10.244.0.20:45385 - 58619 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010958s
	[INFO] 10.244.0.20:47180 - 18176 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015249s
	[INFO] 10.244.0.20:45385 - 7340 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014459s
	[INFO] 10.244.0.20:47180 - 6720 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011083s
	[INFO] 10.244.0.20:45385 - 6128 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014333s
	[INFO] 10.244.0.20:47180 - 51236 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014375s
	[INFO] 10.244.0.20:45385 - 61222 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015667s
	[INFO] 10.244.0.20:45385 - 15473 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014s
	[INFO] 10.244.0.20:47180 - 46469 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009542s
	[INFO] 10.244.0.20:39749 - 43952 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059916s
	[INFO] 10.244.0.20:39749 - 39501 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030958s
	[INFO] 10.244.0.20:39749 - 23307 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020417s
	[INFO] 10.244.0.20:39749 - 29708 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020791s
	[INFO] 10.244.0.20:39749 - 58438 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016917s
	[INFO] 10.244.0.20:39749 - 36361 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018917s
	[INFO] 10.244.0.20:39749 - 32948 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000020375s
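
Note: the NXDOMAIN/NOERROR pairs above are ordinary Kubernetes search-path expansion rather than a lookup failure: the pod's resolver tries each search domain in turn, and only the fully qualified name returns NOERROR. A minimal way to confirm the same walk from inside the cluster (context and pod names taken from this run; nslookup is only available if the image ships it):

	kubectl --context addons-935000 exec nginx -- cat /etc/resolv.conf
	# typically:
	#   search default.svc.cluster.local svc.cluster.local cluster.local
	#   options ndots:5
	kubectl --context addons-935000 exec nginx -- nslookup hello-world-app.default.svc.cluster.local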
	
	
	==> describe nodes <==
	Name:               addons-935000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-935000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=addons-935000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T18_57_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-935000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 02:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-935000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:02:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:02:29 +0000   Fri, 08 Mar 2024 02:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:02:29 +0000   Fri, 08 Mar 2024 02:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:02:29 +0000   Fri, 08 Mar 2024 02:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:02:29 +0000   Fri, 08 Mar 2024 02:57:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-935000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a86c418ffd74ca686859a1ecd6fa445
	  System UUID:                8a86c418ffd74ca686859a1ecd6fa445
	  Boot ID:                    4e0a1ebb-28b8-404d-b371-012026052de6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-ggdtb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-c956m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  headlamp                    headlamp-7ddfbb94ff-p7n25                0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 coredns-5dd5756b68-jdvxt                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-935000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-935000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-935000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-prt8x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-addons-935000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-przgj           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m29s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-935000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-935000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-935000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-935000 status is now: NodeReady
	  Normal  RegisteredNode           4m31s  node-controller  Node addons-935000 event: Registered Node addons-935000 in Controller
	
	
	==> dmesg <==
	[  +7.207739] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.468959] kauditd_printk_skb: 9 callbacks suppressed
	[Mar 8 02:59] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.068358] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.435065] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.103448] kauditd_printk_skb: 35 callbacks suppressed
	[  +7.206357] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.939111] kauditd_printk_skb: 2 callbacks suppressed
	[Mar 8 03:00] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.995058] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.290686] kauditd_printk_skb: 18 callbacks suppressed
	[ +22.316872] kauditd_printk_skb: 1 callbacks suppressed
	[Mar 8 03:01] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.051752] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.268104] kauditd_printk_skb: 3 callbacks suppressed
	[ +10.595952] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.930998] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.916246] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.749093] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.164919] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.746997] kauditd_printk_skb: 12 callbacks suppressed
	[Mar 8 03:02] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.215205] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.747328] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.016005] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [0b67fc96690d] <==
	{"level":"info","ts":"2024-03-08T02:57:50.23821Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c46d288d2fcb0590","initial-advertise-peer-urls":["https://192.168.105.2:2380"],"listen-peer-urls":["https://192.168.105.2:2380"],"advertise-client-urls":["https://192.168.105.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T02:57:50.242059Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T02:57:50.628054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T02:57:50.628126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T02:57:50.628157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-03-08T02:57:50.628179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T02:57:50.628217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-08T02:57:50.628236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T02:57:50.628258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-08T02:57:50.636123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-935000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T02:57:50.636204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T02:57:50.63666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T02:57:50.636706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T02:57:50.637045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-03-08T02:57:50.637351Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T02:57:50.637422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T02:57:50.637477Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T02:57:50.640901Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T02:57:50.640926Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T02:57:50.642272Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T02:58:36.865493Z","caller":"traceutil/trace.go:171","msg":"trace[1366116033] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"145.766318ms","start":"2024-03-08T02:58:36.719716Z","end":"2024-03-08T02:58:36.865483Z","steps":["trace[1366116033] 'process raft request'  (duration: 145.687152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:07.051999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.01353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-03-08T02:59:07.053314Z","caller":"traceutil/trace.go:171","msg":"trace[1556614787] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:979; }","duration":"155.333067ms","start":"2024-03-08T02:59:06.897973Z","end":"2024-03-08T02:59:07.053306Z","steps":["trace[1556614787] 'range keys from in-memory index tree'  (duration: 153.970821ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:01:05.316392Z","caller":"traceutil/trace.go:171","msg":"trace[542544222] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"201.90903ms","start":"2024-03-08T03:01:05.11447Z","end":"2024-03-08T03:01:05.316379Z","steps":["trace[542544222] 'process raft request'  (duration: 201.806697ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:01:57.066476Z","caller":"traceutil/trace.go:171","msg":"trace[1459624538] transaction","detail":"{read_only:false; response_revision:1745; number_of_response:1; }","duration":"187.558649ms","start":"2024-03-08T03:01:56.878906Z","end":"2024-03-08T03:01:57.066465Z","steps":["trace[1459624538] 'process raft request'  (duration: 187.458399ms)"],"step_count":1}
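
Note: the slow-apply warning and the 150-200ms request traces above (against etcd's 100ms expected duration) usually point at slow disk I/O in the VM rather than an etcd fault. A hedged way to check member health from inside the pod, assuming the standard minikube cert layout under /var/lib/minikube/certs/etcd (etcdctl ships in the etcd image):

	kubectl --context addons-935000 -n kube-system exec etcd-addons-935000 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status -w table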
	
	
	==> gcp-auth [7e020ca53e2c] <==
	2024/03/08 03:00:43 GCP Auth Webhook started!
	2024/03/08 03:00:54 Ready to marshal response ...
	2024/03/08 03:00:54 Ready to write response ...
	2024/03/08 03:00:58 Ready to marshal response ...
	2024/03/08 03:00:58 Ready to write response ...
	2024/03/08 03:01:18 Ready to marshal response ...
	2024/03/08 03:01:18 Ready to write response ...
	2024/03/08 03:01:18 Ready to marshal response ...
	2024/03/08 03:01:18 Ready to write response ...
	2024/03/08 03:01:27 Ready to marshal response ...
	2024/03/08 03:01:27 Ready to write response ...
	2024/03/08 03:01:30 Ready to marshal response ...
	2024/03/08 03:01:30 Ready to write response ...
	2024/03/08 03:01:52 Ready to marshal response ...
	2024/03/08 03:01:52 Ready to write response ...
	2024/03/08 03:01:52 Ready to marshal response ...
	2024/03/08 03:01:52 Ready to write response ...
	2024/03/08 03:01:52 Ready to marshal response ...
	2024/03/08 03:01:52 Ready to write response ...
	2024/03/08 03:02:04 Ready to marshal response ...
	2024/03/08 03:02:04 Ready to write response ...
	2024/03/08 03:02:14 Ready to marshal response ...
	2024/03/08 03:02:14 Ready to write response ...
	
	
	==> kernel <==
	 03:02:37 up 5 min,  0 users,  load average: 0.50, 0.53, 0.26
	Linux addons-935000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ee1a7ad36679] <==
	I0308 03:01:46.332493       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.332516       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.339228       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.339245       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.344367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.344391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.344419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.344430       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.354014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.354032       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.355903       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.355919       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.361431       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.361453       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0308 03:01:47.345260       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0308 03:01:47.354179       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0308 03:01:47.367795       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0308 03:01:52.106589       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.28.86"}
	I0308 03:01:52.400254       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0308 03:02:04.365869       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0308 03:02:04.458610       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.249.210"}
	I0308 03:02:14.674431       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.74.209"}
	I0308 03:02:15.250117       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0308 03:02:15.252065       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0308 03:02:16.258742       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [113af40383b3] <==
	W0308 03:02:17.112518       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:17.112538       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:17.528059       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:17.528084       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:19.004802       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:19.004826       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:23.191479       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:23.191500       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:24.796921       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:24.796942       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0308 03:02:25.279738       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0308 03:02:25.804951       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:25.804968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0308 03:02:27.147422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="25.5µs"
	I0308 03:02:28.158880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.333µs"
	I0308 03:02:29.177872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.25µs"
	I0308 03:02:30.375135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="2.042µs"
	I0308 03:02:30.375899       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0308 03:02:30.375941       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0308 03:02:31.909514       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:31.909537       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0308 03:02:36.229546       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0308 03:02:36.229561       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 03:02:36.551298       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0308 03:02:36.551316       1 shared_informer.go:318] Caches are synced for garbage collector
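
Note: the repeated PartialObjectMetadata list failures line up with the gadget addon teardown: the kube-apiserver log above shows the traces.gadget.kinvolk.io watchers terminated at 03:02:16, the gadget namespace is deleted at 03:02:25, and the metadata informers keep retrying until the quota and garbage-collector caches resync at 03:02:36. If reproducing, a quick check that the CRD is really gone (context name from this run):

	kubectl --context addons-935000 get crd traces.gadget.kinvolk.io
	# expect: Error from server (NotFound) once the teardown has settled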
	
	
	==> kube-proxy [2a89b3954b3a] <==
	I0308 02:58:07.679801       1 server_others.go:69] "Using iptables proxy"
	I0308 02:58:07.695287       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0308 02:58:07.879177       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 02:58:07.879197       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 02:58:07.879996       1 server_others.go:152] "Using iptables Proxier"
	I0308 02:58:07.880045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 02:58:07.880189       1 server.go:846] "Version info" version="v1.28.4"
	I0308 02:58:07.880195       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 02:58:07.880812       1 config.go:188] "Starting service config controller"
	I0308 02:58:07.880832       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 02:58:07.880851       1 config.go:97] "Starting endpoint slice config controller"
	I0308 02:58:07.880855       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 02:58:07.881242       1 config.go:315] "Starting node config controller"
	I0308 02:58:07.881245       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 02:58:07.981700       1 shared_informer.go:318] Caches are synced for node config
	I0308 02:58:07.981726       1 shared_informer.go:318] Caches are synced for service config
	I0308 02:58:07.981748       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32d6186d196d] <==
	W0308 02:57:51.536745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:57:51.536907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 02:57:51.536756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:57:51.536945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 02:57:51.536767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 02:57:51.536993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:57:51.536776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 02:57:51.537032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 02:57:51.536787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:57:51.537081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 02:57:51.536819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 02:57:51.537115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 02:57:52.429076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 02:57:52.429096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 02:57:52.474660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 02:57:52.474677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:57:52.487431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:57:52.487505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 02:57:52.496214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 02:57:52.496256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 02:57:52.509107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:57:52.509210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 02:57:52.516803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 02:57:52.516869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0308 02:57:52.925368       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 03:02:26 addons-935000 kubelet[2378]: E0308 03:02:26.622327    2378 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(98a50817-d269-477c-a4bf-9a3d91183ab8)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="98a50817-d269-477c-a4bf-9a3d91183ab8"
	Mar 08 03:02:27 addons-935000 kubelet[2378]: I0308 03:02:27.141723    2378 scope.go:117] "RemoveContainer" containerID="82113a7c044e475055672a99fffe07fd15176e7bf523af7e2beaf38823d5b523"
	Mar 08 03:02:28 addons-935000 kubelet[2378]: I0308 03:02:28.150447    2378 scope.go:117] "RemoveContainer" containerID="82113a7c044e475055672a99fffe07fd15176e7bf523af7e2beaf38823d5b523"
	Mar 08 03:02:28 addons-935000 kubelet[2378]: I0308 03:02:28.150643    2378 scope.go:117] "RemoveContainer" containerID="fd908e104cd829306c0b4eaec1ac326b7bcd718deaf1b1bef8919e48da880595"
	Mar 08 03:02:28 addons-935000 kubelet[2378]: E0308 03:02:28.150827    2378 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-ggdtb_default(9b043578-96c8-4848-97cb-b76a01311377)\"" pod="default/hello-world-app-5d77478584-ggdtb" podUID="9b043578-96c8-4848-97cb-b76a01311377"
	Mar 08 03:02:29 addons-935000 kubelet[2378]: I0308 03:02:29.166732    2378 scope.go:117] "RemoveContainer" containerID="fd908e104cd829306c0b4eaec1ac326b7bcd718deaf1b1bef8919e48da880595"
	Mar 08 03:02:29 addons-935000 kubelet[2378]: E0308 03:02:29.166861    2378 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-ggdtb_default(9b043578-96c8-4848-97cb-b76a01311377)\"" pod="default/hello-world-app-5d77478584-ggdtb" podUID="9b043578-96c8-4848-97cb-b76a01311377"
	Mar 08 03:02:29 addons-935000 kubelet[2378]: I0308 03:02:29.995132    2378 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lft64\" (UniqueName: \"kubernetes.io/projected/98a50817-d269-477c-a4bf-9a3d91183ab8-kube-api-access-lft64\") pod \"98a50817-d269-477c-a4bf-9a3d91183ab8\" (UID: \"98a50817-d269-477c-a4bf-9a3d91183ab8\") "
	Mar 08 03:02:29 addons-935000 kubelet[2378]: I0308 03:02:29.998013    2378 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a50817-d269-477c-a4bf-9a3d91183ab8-kube-api-access-lft64" (OuterVolumeSpecName: "kube-api-access-lft64") pod "98a50817-d269-477c-a4bf-9a3d91183ab8" (UID: "98a50817-d269-477c-a4bf-9a3d91183ab8"). InnerVolumeSpecName "kube-api-access-lft64". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:02:30 addons-935000 kubelet[2378]: I0308 03:02:30.095296    2378 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lft64\" (UniqueName: \"kubernetes.io/projected/98a50817-d269-477c-a4bf-9a3d91183ab8-kube-api-access-lft64\") on node \"addons-935000\" DevicePath \"\""
	Mar 08 03:02:30 addons-935000 kubelet[2378]: I0308 03:02:30.173916    2378 scope.go:117] "RemoveContainer" containerID="5cca1d44682ddd5b5ca26ecc8b7fcb089adfe1ebea947c4a925c496c24660c5b"
	Mar 08 03:02:31 addons-935000 kubelet[2378]: I0308 03:02:31.624887    2378 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="56e0ab9b-61d7-40ae-bb9c-5948852b554f" path="/var/lib/kubelet/pods/56e0ab9b-61d7-40ae-bb9c-5948852b554f/volumes"
	Mar 08 03:02:31 addons-935000 kubelet[2378]: I0308 03:02:31.625075    2378 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="98a50817-d269-477c-a4bf-9a3d91183ab8" path="/var/lib/kubelet/pods/98a50817-d269-477c-a4bf-9a3d91183ab8/volumes"
	Mar 08 03:02:31 addons-935000 kubelet[2378]: I0308 03:02:31.625250    2378 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e2a2d212-5805-415c-b2c4-101f6243ebf5" path="/var/lib/kubelet/pods/e2a2d212-5805-415c-b2c4-101f6243ebf5/volumes"
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.619066    2378 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7knd5\" (UniqueName: \"kubernetes.io/projected/1a015766-faeb-40a8-a1a4-3f01c940170f-kube-api-access-7knd5\") pod \"1a015766-faeb-40a8-a1a4-3f01c940170f\" (UID: \"1a015766-faeb-40a8-a1a4-3f01c940170f\") "
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.619101    2378 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a015766-faeb-40a8-a1a4-3f01c940170f-webhook-cert\") pod \"1a015766-faeb-40a8-a1a4-3f01c940170f\" (UID: \"1a015766-faeb-40a8-a1a4-3f01c940170f\") "
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.619749    2378 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a015766-faeb-40a8-a1a4-3f01c940170f-kube-api-access-7knd5" (OuterVolumeSpecName: "kube-api-access-7knd5") pod "1a015766-faeb-40a8-a1a4-3f01c940170f" (UID: "1a015766-faeb-40a8-a1a4-3f01c940170f"). InnerVolumeSpecName "kube-api-access-7knd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.623672    2378 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a015766-faeb-40a8-a1a4-3f01c940170f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1a015766-faeb-40a8-a1a4-3f01c940170f" (UID: "1a015766-faeb-40a8-a1a4-3f01c940170f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.625719    2378 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1a015766-faeb-40a8-a1a4-3f01c940170f" path="/var/lib/kubelet/pods/1a015766-faeb-40a8-a1a4-3f01c940170f/volumes"
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.720150    2378 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a015766-faeb-40a8-a1a4-3f01c940170f-webhook-cert\") on node \"addons-935000\" DevicePath \"\""
	Mar 08 03:02:33 addons-935000 kubelet[2378]: I0308 03:02:33.720165    2378 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7knd5\" (UniqueName: \"kubernetes.io/projected/1a015766-faeb-40a8-a1a4-3f01c940170f-kube-api-access-7knd5\") on node \"addons-935000\" DevicePath \"\""
	Mar 08 03:02:34 addons-935000 kubelet[2378]: I0308 03:02:34.199436    2378 scope.go:117] "RemoveContainer" containerID="43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4"
	Mar 08 03:02:34 addons-935000 kubelet[2378]: I0308 03:02:34.207647    2378 scope.go:117] "RemoveContainer" containerID="43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4"
	Mar 08 03:02:34 addons-935000 kubelet[2378]: E0308 03:02:34.207876    2378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4" containerID="43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4"
	Mar 08 03:02:34 addons-935000 kubelet[2378]: I0308 03:02:34.207896    2378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4"} err="failed to get container status \"43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4\": rpc error: code = Unknown desc = Error response from daemon: No such container: 43f9463d11423394690a0c9eb831fd98e03a83ab52a76839be3647d95f6311f4"
	
	
	==> storage-provisioner [8f3447448a93] <==
	I0308 02:58:11.100210       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 02:58:11.105486       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 02:58:11.105514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 02:58:11.108173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 02:58:11.108326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-935000_c78a4c8d-5e6f-4075-8310-d50a5eb4a2af!
	I0308 02:58:11.108770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b173e407-40cc-4b76-8d8d-373975642d2e", APIVersion:"v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-935000_c78a4c8d-5e6f-4075-8310-d50a5eb4a2af became leader
	I0308 02:58:11.208874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-935000_c78a4c8d-5e6f-4075-8310-d50a5eb4a2af!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-935000 -n addons-935000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-935000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (34.05s)
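
Note: the kubelet log above shows hello-world-app-5d77478584-ggdtb cycling through a 10s CrashLoopBackOff during the post-mortem window, which is consistent with the Ingress test timing out. When reproducing locally, a first pass at the container's own failure might look like this (pod and ReplicaSet names are from this run and will differ):

	kubectl --context addons-935000 -n default describe pod hello-world-app-5d77478584-ggdtb
	kubectl --context addons-935000 -n default logs hello-world-app-5d77478584-ggdtb --previous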

TestCertOptions (10.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-168000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-168000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.931629833s)

-- stdout --
	* [cert-options-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-168000" primary control-plane node in "cert-options-168000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-168000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-168000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-168000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-168000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.812458ms)

-- stdout --
	* The control-plane node cert-options-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-168000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-168000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-168000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-168000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-168000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.609542ms)

-- stdout --
	* The control-plane node cert-options-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-168000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-168000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-168000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-07 19:36:49.786441 -0800 PST m=+2471.112088167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-168000 -n cert-options-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-168000 -n cert-options-168000: exit status 7 (32.504834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-168000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-168000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-168000
--- FAIL: TestCertOptions (10.22s)
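
Note: both start attempts above die on the same host-side error, `Failed to connect to "/var/run/socket_vmnet": Connection refused` (the TestCertExpiration run below fails identically). That points at the socket_vmnet daemon on the Darwin host rather than at the certificate logic under test. A hedged host-side health check (socket path taken from the error above; Homebrew installs may place the socket under the brew prefix instead):

	ls -l /var/run/socket_vmnet               # the listening socket should exist
	pgrep -fl socket_vmnet                    # the daemon should be running
	sudo brew services restart socket_vmnet   # if installed via Homebrew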

TestCertExpiration (195.22s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.82498475s)

-- stdout --
	* [cert-expiration-988000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-988000" primary control-plane node in "cert-expiration-988000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-988000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-988000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230482209s)

-- stdout --
	* [cert-expiration-988000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-988000" primary control-plane node in "cert-expiration-988000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-988000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-988000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-988000" primary control-plane node in "cert-expiration-988000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-07 19:39:49.769212 -0800 PST m=+2651.102226001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-988000 -n cert-expiration-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-988000 -n cert-expiration-988000: exit status 7 (58.912042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-988000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-988000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-988000
--- FAIL: TestCertExpiration (195.22s)
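
Note: neither start attempt above ever reached the certificate-expiration logic; both failed while provisioning the VM because the qemu2 driver could not reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal triage sketch from the build host, assuming a standard socket_vmnet install (the Homebrew service name below is an assumption about this agent's setup):

	# Does the socket exist, and is a daemon actually running?
	ls -l /var/run/socket_vmnet
	ps aux | grep -v grep | grep socket_vmnet

	# If nothing is listening, restart the daemon before re-running the suite
	# (socket_vmnet must run as root to use vmnet.framework).
	sudo brew services restart socket_vmnet

The same "Connection refused" error recurs in every failure below, so this points to an environment problem on the agent rather than anything specific to TestCertExpiration.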

TestDockerFlags (10.01s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-034000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-034000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.745095375s)

-- stdout --
	* [docker-flags-034000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-034000" primary control-plane node in "docker-flags-034000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-034000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:36:29.724182    4436 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:36:29.724308    4436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:29.724311    4436 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:29.724314    4436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:29.724456    4436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:36:29.725497    4436 out.go:298] Setting JSON to false
	I0307 19:36:29.741584    4436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3961,"bootTime":1709865028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:36:29.741647    4436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:36:29.747025    4436 out.go:177] * [docker-flags-034000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:36:29.759988    4436 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:36:29.755075    4436 notify.go:220] Checking for updates...
	I0307 19:36:29.765990    4436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:36:29.769986    4436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:36:29.774029    4436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:36:29.776933    4436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:36:29.779986    4436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:36:29.783455    4436 config.go:182] Loaded profile config "force-systemd-flag-741000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:29.783537    4436 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:29.783583    4436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:36:29.788002    4436 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:36:29.795025    4436 start.go:297] selected driver: qemu2
	I0307 19:36:29.795032    4436 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:36:29.795039    4436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:36:29.797381    4436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:36:29.799977    4436 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:36:29.803080    4436 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0307 19:36:29.803114    4436 cni.go:84] Creating CNI manager for ""
	I0307 19:36:29.803120    4436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:36:29.803128    4436 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:36:29.803157    4436 start.go:340] cluster config:
	{Name:docker-flags-034000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:36:29.807842    4436 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:36:29.816063    4436 out.go:177] * Starting "docker-flags-034000" primary control-plane node in "docker-flags-034000" cluster
	I0307 19:36:29.820000    4436 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:36:29.820018    4436 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:36:29.820034    4436 cache.go:56] Caching tarball of preloaded images
	I0307 19:36:29.820103    4436 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:36:29.820110    4436 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:36:29.820177    4436 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/docker-flags-034000/config.json ...
	I0307 19:36:29.820190    4436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/docker-flags-034000/config.json: {Name:mk4d9950a818aeb1c7e7251bfa8d17f92186bb1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:29.820438    4436 start.go:360] acquireMachinesLock for docker-flags-034000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:29.820492    4436 start.go:364] duration metric: took 31.875µs to acquireMachinesLock for "docker-flags-034000"
	I0307 19:36:29.820505    4436 start.go:93] Provisioning new machine with config: &{Name:docker-flags-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:29.820544    4436 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:29.828011    4436 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:29.847429    4436 start.go:159] libmachine.API.Create for "docker-flags-034000" (driver="qemu2")
	I0307 19:36:29.847463    4436 client.go:168] LocalClient.Create starting
	I0307 19:36:29.847534    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:29.847567    4436 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:29.847578    4436 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:29.847626    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:29.847651    4436 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:29.847658    4436 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:29.848066    4436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:29.985850    4436 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:30.025034    4436 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:30.025045    4436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:30.025192    4436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:30.037647    4436 main.go:141] libmachine: STDOUT: 
	I0307 19:36:30.037669    4436 main.go:141] libmachine: STDERR: 
	I0307 19:36:30.037730    4436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2 +20000M
	I0307 19:36:30.048379    4436 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:30.048401    4436 main.go:141] libmachine: STDERR: 
	I0307 19:36:30.048414    4436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:30.048419    4436 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:30.048456    4436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:4c:f1:c6:42:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:30.050120    4436 main.go:141] libmachine: STDOUT: 
	I0307 19:36:30.050137    4436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:30.050156    4436 client.go:171] duration metric: took 202.693917ms to LocalClient.Create
	I0307 19:36:32.052305    4436 start.go:128] duration metric: took 2.231823333s to createHost
	I0307 19:36:32.052355    4436 start.go:83] releasing machines lock for "docker-flags-034000", held for 2.231943042s
	W0307 19:36:32.052410    4436 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:32.072618    4436 out.go:177] * Deleting "docker-flags-034000" in qemu2 ...
	W0307 19:36:32.094889    4436 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:32.094913    4436 start.go:728] Will try again in 5 seconds ...
	I0307 19:36:37.096911    4436 start.go:360] acquireMachinesLock for docker-flags-034000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:37.114319    4436 start.go:364] duration metric: took 17.264875ms to acquireMachinesLock for "docker-flags-034000"
	I0307 19:36:37.114450    4436 start.go:93] Provisioning new machine with config: &{Name:docker-flags-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:37.114750    4436 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:37.127375    4436 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:37.175279    4436 start.go:159] libmachine.API.Create for "docker-flags-034000" (driver="qemu2")
	I0307 19:36:37.175332    4436 client.go:168] LocalClient.Create starting
	I0307 19:36:37.175461    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:37.175523    4436 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:37.175541    4436 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:37.175608    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:37.175657    4436 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:37.175671    4436 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:37.176223    4436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:37.325322    4436 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:37.362657    4436 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:37.362662    4436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:37.362862    4436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:37.375267    4436 main.go:141] libmachine: STDOUT: 
	I0307 19:36:37.375290    4436 main.go:141] libmachine: STDERR: 
	I0307 19:36:37.375358    4436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2 +20000M
	I0307 19:36:37.386192    4436 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:37.386214    4436 main.go:141] libmachine: STDERR: 
	I0307 19:36:37.386226    4436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:37.386230    4436 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:37.386263    4436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:52:0c:16:ca:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/docker-flags-034000/disk.qcow2
	I0307 19:36:37.387933    4436 main.go:141] libmachine: STDOUT: 
	I0307 19:36:37.387949    4436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:37.387960    4436 client.go:171] duration metric: took 212.631125ms to LocalClient.Create
	I0307 19:36:39.390135    4436 start.go:128] duration metric: took 2.27542825s to createHost
	I0307 19:36:39.390227    4436 start.go:83] releasing machines lock for "docker-flags-034000", held for 2.275968125s
	W0307 19:36:39.390576    4436 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:39.402134    4436 out.go:177] 
	W0307 19:36:39.412388    4436 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:36:39.412415    4436 out.go:239] * 
	* 
	W0307 19:36:39.414799    4436 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:39.423061    4436 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-034000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.739625ms)

-- stdout --
	* The control-plane node docker-flags-034000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-034000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-034000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-034000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-034000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-034000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-034000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.859334ms)

-- stdout --
	* The control-plane node docker-flags-034000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-034000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-034000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-034000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-034000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-034000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-07 19:36:39.566213 -0800 PST m=+2460.891441459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-034000 -n docker-flags-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-034000 -n docker-flags-034000: exit status 7 (31.044458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-034000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-034000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-034000
--- FAIL: TestDockerFlags (10.01s)
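
Note: exit status 83 above only means the control-plane host was never running, so the test's real assertions were never exercised. For reference, on a cluster that does boot, the test expects the --docker-env and --docker-opt flags to surface in Docker's systemd unit, roughly along these lines (illustrative output, not from this run):

	$ out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT ...
	$ out/minikube-darwin-arm64 -p docker-flags-034000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... ; ... }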

TestForceSystemdFlag (10.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-741000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-741000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.94029625s)

-- stdout --
	* [force-systemd-flag-741000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-741000" primary control-plane node in "force-systemd-flag-741000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-741000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:36:24.587511    4414 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:36:24.587656    4414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:24.587660    4414 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:24.587662    4414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:24.587791    4414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:36:24.588846    4414 out.go:298] Setting JSON to false
	I0307 19:36:24.604916    4414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3956,"bootTime":1709865028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:36:24.604978    4414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:36:24.611794    4414 out.go:177] * [force-systemd-flag-741000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:36:24.617786    4414 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:36:24.617876    4414 notify.go:220] Checking for updates...
	I0307 19:36:24.620845    4414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:36:24.623714    4414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:36:24.627780    4414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:36:24.629406    4414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:36:24.636750    4414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:36:24.640884    4414 config.go:182] Loaded profile config "force-systemd-env-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:24.640955    4414 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:24.641010    4414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:36:24.644722    4414 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:36:24.651593    4414 start.go:297] selected driver: qemu2
	I0307 19:36:24.651599    4414 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:36:24.651604    4414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:36:24.653797    4414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:36:24.656816    4414 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:36:24.660824    4414 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 19:36:24.660863    4414 cni.go:84] Creating CNI manager for ""
	I0307 19:36:24.660871    4414 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:36:24.660882    4414 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:36:24.660919    4414 start.go:340] cluster config:
	{Name:force-systemd-flag-741000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-741000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:36:24.665566    4414 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:36:24.672751    4414 out.go:177] * Starting "force-systemd-flag-741000" primary control-plane node in "force-systemd-flag-741000" cluster
	I0307 19:36:24.676787    4414 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:36:24.676803    4414 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:36:24.676816    4414 cache.go:56] Caching tarball of preloaded images
	I0307 19:36:24.676872    4414 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:36:24.676877    4414 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:36:24.676951    4414 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/force-systemd-flag-741000/config.json ...
	I0307 19:36:24.676962    4414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/force-systemd-flag-741000/config.json: {Name:mkf3ad25203054781da548cc3704d829628f8c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:24.677182    4414 start.go:360] acquireMachinesLock for force-systemd-flag-741000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:24.677216    4414 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "force-systemd-flag-741000"
	I0307 19:36:24.677227    4414 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-741000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-741000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:24.677261    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:24.681770    4414 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:24.698967    4414 start.go:159] libmachine.API.Create for "force-systemd-flag-741000" (driver="qemu2")
	I0307 19:36:24.698993    4414 client.go:168] LocalClient.Create starting
	I0307 19:36:24.699048    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:24.699082    4414 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:24.699090    4414 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:24.699141    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:24.699161    4414 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:24.699167    4414 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:24.699507    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:24.839380    4414 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:24.944346    4414 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:24.944355    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:24.944533    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:24.957075    4414 main.go:141] libmachine: STDOUT: 
	I0307 19:36:24.957096    4414 main.go:141] libmachine: STDERR: 
	I0307 19:36:24.957144    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2 +20000M
	I0307 19:36:24.967862    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:24.967878    4414 main.go:141] libmachine: STDERR: 
	I0307 19:36:24.967900    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:24.967903    4414 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:24.967934    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:dd:79:39:3e:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:24.969643    4414 main.go:141] libmachine: STDOUT: 
	I0307 19:36:24.969659    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:24.969676    4414 client.go:171] duration metric: took 270.688375ms to LocalClient.Create
	I0307 19:36:26.971852    4414 start.go:128] duration metric: took 2.294664583s to createHost
	I0307 19:36:26.971961    4414 start.go:83] releasing machines lock for "force-systemd-flag-741000", held for 2.294824625s
	W0307 19:36:26.972015    4414 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:26.991029    4414 out.go:177] * Deleting "force-systemd-flag-741000" in qemu2 ...
	W0307 19:36:27.011118    4414 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:27.011142    4414 start.go:728] Will try again in 5 seconds ...
	I0307 19:36:32.013133    4414 start.go:360] acquireMachinesLock for force-systemd-flag-741000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:32.052444    4414 start.go:364] duration metric: took 39.136ms to acquireMachinesLock for "force-systemd-flag-741000"
	I0307 19:36:32.052758    4414 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-741000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-741000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:32.053010    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:32.063622    4414 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:32.113082    4414 start.go:159] libmachine.API.Create for "force-systemd-flag-741000" (driver="qemu2")
	I0307 19:36:32.113134    4414 client.go:168] LocalClient.Create starting
	I0307 19:36:32.113251    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:32.113314    4414 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:32.113333    4414 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:32.113389    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:32.113432    4414 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:32.113463    4414 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:32.113964    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:32.263496    4414 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:32.417157    4414 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:32.417165    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:32.417371    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:32.430347    4414 main.go:141] libmachine: STDOUT: 
	I0307 19:36:32.430375    4414 main.go:141] libmachine: STDERR: 
	I0307 19:36:32.430434    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2 +20000M
	I0307 19:36:32.441398    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:32.441414    4414 main.go:141] libmachine: STDERR: 
	I0307 19:36:32.441426    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:32.441431    4414 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:32.441471    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:69:2e:43:15:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-flag-741000/disk.qcow2
	I0307 19:36:32.443183    4414 main.go:141] libmachine: STDOUT: 
	I0307 19:36:32.443200    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:32.443213    4414 client.go:171] duration metric: took 330.086791ms to LocalClient.Create
	I0307 19:36:34.445358    4414 start.go:128] duration metric: took 2.392405667s to createHost
	I0307 19:36:34.445458    4414 start.go:83] releasing machines lock for "force-systemd-flag-741000", held for 2.39308875s
	W0307 19:36:34.445809    4414 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-741000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-741000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:34.458457    4414 out.go:177] 
	W0307 19:36:34.468676    4414 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:36:34.468732    4414 out.go:239] * 
	* 
	W0307 19:36:34.471426    4414 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:34.483425    4414 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-741000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-741000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-741000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.748917ms)

-- stdout --
	* The control-plane node force-systemd-flag-741000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-741000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-741000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-07 19:36:34.57911 -0800 PST m=+2455.904134126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-741000 -n force-systemd-flag-741000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-741000 -n force-systemd-flag-741000: exit status 7 (36.575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-741000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-741000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-741000
--- FAIL: TestForceSystemdFlag (10.16s)
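
This failure (and TestForceSystemdEnv below) reduces to a single root cause: every qemu-system-aarch64 launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, whose connect to the /var/run/socket_vmnet unix socket is refused, meaning no socket_vmnet daemon was listening on the build host. A minimal Go sketch that reproduces just that connectivity check in isolation (a hypothetical standalone helper, not part of the minikube test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing socket_vmnet_client invocation above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Matches the failure in the log: the daemon is not listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On the affected host this should keep failing with the same "connection refused" until the socket_vmnet service is started.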

TestForceSystemdEnv (10.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.980003209s)

-- stdout --
	* [force-systemd-env-390000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-390000" primary control-plane node in "force-systemd-env-390000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-390000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:36:19.516697    4381 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:36:19.516837    4381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:19.516841    4381 out.go:304] Setting ErrFile to fd 2...
	I0307 19:36:19.516843    4381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:36:19.516987    4381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:36:19.517999    4381 out.go:298] Setting JSON to false
	I0307 19:36:19.536233    4381 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3951,"bootTime":1709865028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:36:19.536308    4381 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:36:19.542532    4381 out.go:177] * [force-systemd-env-390000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:36:19.558426    4381 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:36:19.550397    4381 notify.go:220] Checking for updates...
	I0307 19:36:19.569274    4381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:36:19.582382    4381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:36:19.595345    4381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:36:19.608373    4381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:36:19.622351    4381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0307 19:36:19.629912    4381 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:36:19.629979    4381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:36:19.636168    4381 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:36:19.645380    4381 start.go:297] selected driver: qemu2
	I0307 19:36:19.645387    4381 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:36:19.645408    4381 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:36:19.647944    4381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:36:19.654370    4381 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:36:19.659467    4381 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 19:36:19.659507    4381 cni.go:84] Creating CNI manager for ""
	I0307 19:36:19.659517    4381 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:36:19.659521    4381 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:36:19.659571    4381 start.go:340] cluster config:
	{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:36:19.664129    4381 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:36:19.669427    4381 out.go:177] * Starting "force-systemd-env-390000" primary control-plane node in "force-systemd-env-390000" cluster
	I0307 19:36:19.678288    4381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:36:19.678303    4381 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:36:19.678339    4381 cache.go:56] Caching tarball of preloaded images
	I0307 19:36:19.678428    4381 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:36:19.678434    4381 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:36:19.678497    4381 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/force-systemd-env-390000/config.json ...
	I0307 19:36:19.678546    4381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/force-systemd-env-390000/config.json: {Name:mk89e4ff2b6d45bb5e7c09af4c9f35d966f5bcb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:36:19.679244    4381 start.go:360] acquireMachinesLock for force-systemd-env-390000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:19.679278    4381 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "force-systemd-env-390000"
	I0307 19:36:19.679289    4381 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:19.679322    4381 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:19.688299    4381 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:19.704241    4381 start.go:159] libmachine.API.Create for "force-systemd-env-390000" (driver="qemu2")
	I0307 19:36:19.704262    4381 client.go:168] LocalClient.Create starting
	I0307 19:36:19.704323    4381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:19.704351    4381 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:19.704363    4381 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:19.704404    4381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:19.704425    4381 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:19.704432    4381 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:19.704783    4381 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:19.860815    4381 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:19.892958    4381 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:19.892965    4381 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:19.893154    4381 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:19.906336    4381 main.go:141] libmachine: STDOUT: 
	I0307 19:36:19.906367    4381 main.go:141] libmachine: STDERR: 
	I0307 19:36:19.906456    4381 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2 +20000M
	I0307 19:36:19.918729    4381 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:19.918757    4381 main.go:141] libmachine: STDERR: 
	I0307 19:36:19.918794    4381 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:19.918798    4381 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:19.918834    4381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8e:c3:56:82:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:19.920813    4381 main.go:141] libmachine: STDOUT: 
	I0307 19:36:19.920838    4381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:19.920862    4381 client.go:171] duration metric: took 216.602666ms to LocalClient.Create
	I0307 19:36:21.923042    4381 start.go:128] duration metric: took 2.243780334s to createHost
	I0307 19:36:21.923138    4381 start.go:83] releasing machines lock for "force-systemd-env-390000", held for 2.243941292s
	W0307 19:36:21.923185    4381 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:21.930425    4381 out.go:177] * Deleting "force-systemd-env-390000" in qemu2 ...
	W0307 19:36:21.955148    4381 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:21.955186    4381 start.go:728] Will try again in 5 seconds ...
	I0307 19:36:26.957204    4381 start.go:360] acquireMachinesLock for force-systemd-env-390000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:36:26.972051    4381 start.go:364] duration metric: took 14.738833ms to acquireMachinesLock for "force-systemd-env-390000"
	I0307 19:36:26.972180    4381 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:36:26.972439    4381 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:36:26.984145    4381 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 19:36:27.033447    4381 start.go:159] libmachine.API.Create for "force-systemd-env-390000" (driver="qemu2")
	I0307 19:36:27.033491    4381 client.go:168] LocalClient.Create starting
	I0307 19:36:27.033610    4381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:36:27.033680    4381 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:27.033699    4381 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:27.033756    4381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:36:27.033800    4381 main.go:141] libmachine: Decoding PEM data...
	I0307 19:36:27.033813    4381 main.go:141] libmachine: Parsing certificate...
	I0307 19:36:27.034353    4381 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:36:27.184644    4381 main.go:141] libmachine: Creating SSH key...
	I0307 19:36:27.386498    4381 main.go:141] libmachine: Creating Disk image...
	I0307 19:36:27.386506    4381 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:36:27.386714    4381 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:27.399907    4381 main.go:141] libmachine: STDOUT: 
	I0307 19:36:27.399926    4381 main.go:141] libmachine: STDERR: 
	I0307 19:36:27.399997    4381 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2 +20000M
	I0307 19:36:27.411014    4381 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:36:27.411032    4381 main.go:141] libmachine: STDERR: 
	I0307 19:36:27.411045    4381 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:27.411049    4381 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:36:27.411101    4381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:95:11:94:92:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I0307 19:36:27.412827    4381 main.go:141] libmachine: STDOUT: 
	I0307 19:36:27.412843    4381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:36:27.412856    4381 client.go:171] duration metric: took 379.37325ms to LocalClient.Create
	I0307 19:36:29.414949    4381 start.go:128] duration metric: took 2.442585292s to createHost
	I0307 19:36:29.415022    4381 start.go:83] releasing machines lock for "force-systemd-env-390000", held for 2.443050667s
	W0307 19:36:29.415424    4381 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:36:29.428105    4381 out.go:177] 
	W0307 19:36:29.438352    4381 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:36:29.438386    4381 out.go:239] * 
	* 
	W0307 19:36:29.441506    4381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:36:29.449059    4381 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (85.921084ms)

-- stdout --
	* The control-plane node force-systemd-env-390000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-390000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-07 19:36:29.552247 -0800 PST m=+2450.877066126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-390000 -n force-systemd-env-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-390000 -n force-systemd-env-390000: exit status 7 (41.853084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-390000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-390000
--- FAIL: TestForceSystemdEnv (10.21s)
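
Before giving up, the start path visible in this log deletes the half-created profile and retries once after five seconds ("StartHost failed, but will try again", then "Will try again in 5 seconds") before exiting with GUEST_PROVISION. A sketch of that one-retry shape, illustrative only and not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; on the affected host
	// it always fails the way the log above shows.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// minikube deletes the partially created machine here, then waits.
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Since the underlying daemon never comes back within those five seconds, the second attempt fails identically and the test exits with status 80.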

TestFunctional/parallel/ServiceCmdConnect (39.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-323000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-323000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-vw2b9" [6e2db38d-1590-48c5-9265-d1f23f7bf32b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-vw2b9" [6e2db38d-1590-48c5-9265-d1f23f7bf32b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.003599834s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30311
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30311: Get "http://192.168.105.4:30311": dial tcp 192.168.105.4:30311: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-323000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-vw2b9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-323000/192.168.105.4
Start Time:       Thu, 07 Mar 2024 19:07:47 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://a00cc681572349edcbd7a09804c5754c5de30d41661fb8c96239c73714e89b91
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Mar 2024 19:08:14 -0800
      Finished:     Thu, 07 Mar 2024 19:08:14 -0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Mar 2024 19:07:59 -0800
      Finished:     Thu, 07 Mar 2024 19:07:59 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4jl7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-t4jl7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-vw2b9 to functional-323000
Normal   Pulling    38s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     26s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 6.869s (11.421s including waiting)
Normal   Created    11s (x3 over 26s)  kubelet            Created container echoserver-arm
Normal   Started    11s (x3 over 26s)  kubelet            Started container echoserver-arm
Normal   Pulled     11s (x2 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    11s (x3 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-vw2b9_default(6e2db38d-1590-48c5-9265-d1f23f7bf32b)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-323000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-323000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.56.147
IPs:                      10.97.56.147
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30311/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
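
The service description above shows an empty Endpoints: field, which is consistent with the repeated "connection refused" on the NodePort: the only backing pod is crash-looping ("exec /usr/sbin/nginx: exec format error", i.e. the image's entrypoint binary does not match the node's architecture), so kube-proxy has nothing to forward port 30311 to. A minimal sketch of the kind of polling fetch the test performs against the printed URL (the retry count and interval here are assumptions, not the test's actual values):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// URL printed earlier in this log by "minikube service ... --url".
		const url = "http://192.168.105.4:30311"
		for attempt := 1; attempt <= 7; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Printf("endpoint healthy after %d attempt(s)\n", attempt)
				return
			}
			// With no ready endpoints behind the NodePort, every attempt
			// fails with "connect: connection refused", as in the log above.
			fmt.Printf("error fetching %s: %v\n", url, err)
			time.Sleep(3 * time.Second)
		}
		fmt.Println("giving up on", url)
	}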
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-323000 -n functional-323000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| tunnel  | functional-323000 tunnel                                                                                            | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:07 PST |                     |
	|         | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| addons  | functional-323000 addons list                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:07 PST | 07 Mar 24 19:07 PST |
	| addons  | functional-323000 addons list                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:07 PST | 07 Mar 24 19:07 PST |
	|         | -o json                                                                                                             |                   |         |         |                     |                     |
	| service | functional-323000 service                                                                                           | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | hello-node-connect --url                                                                                            |                   |         |         |                     |                     |
	| service | functional-323000 service list                                                                                      | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	| service | functional-323000 service list                                                                                      | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | -o json                                                                                                             |                   |         |         |                     |                     |
	| service | functional-323000 service                                                                                           | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | --namespace=default --https                                                                                         |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                    |                   |         |         |                     |                     |
	| service | functional-323000                                                                                                   | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | service hello-node --url                                                                                            |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                    |                   |         |         |                     |                     |
	| service | functional-323000 service                                                                                           | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | hello-node --url                                                                                                    |                   |         |         |                     |                     |
	| mount   | -p functional-323000                                                                                                | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001:/mount-9p     |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh findmnt                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh -- ls                                                                                         | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh cat                                                                                           | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | /mount-9p/test-1709867295082153000                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh stat                                                                                          | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh stat                                                                                          | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh sudo                                                                                          | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-323000                                                                                                | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port559124222/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh findmnt                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh findmnt                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh -- ls                                                                                         | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST | 07 Mar 24 19:08 PST |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh sudo                                                                                          | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-323000                                                                                                | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount1  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-323000                                                                                                | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount2  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-323000                                                                                                | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount3  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-323000 ssh findmnt                                                                                       | functional-323000 | jenkins | v1.32.0 | 07 Mar 24 19:08 PST |                     |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 19:06:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 19:06:56.157182    2301 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:06:56.157322    2301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:06:56.157323    2301 out.go:304] Setting ErrFile to fd 2...
	I0307 19:06:56.157325    2301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:06:56.157450    2301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:06:56.158514    2301 out.go:298] Setting JSON to false
	I0307 19:06:56.175147    2301 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2188,"bootTime":1709865028,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:06:56.175207    2301 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:06:56.178814    2301 out.go:177] * [functional-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:06:56.185760    2301 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:06:56.189712    2301 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:06:56.185829    2301 notify.go:220] Checking for updates...
	I0307 19:06:56.194675    2301 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:06:56.197710    2301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:06:56.200728    2301 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:06:56.203712    2301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:06:56.206936    2301 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:06:56.206973    2301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:06:56.211715    2301 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:06:56.218678    2301 start.go:297] selected driver: qemu2
	I0307 19:06:56.218681    2301 start.go:901] validating driver "qemu2" against &{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:06:56.218741    2301 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:06:56.220782    2301 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:06:56.220828    2301 cni.go:84] Creating CNI manager for ""
	I0307 19:06:56.220835    2301 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:06:56.220879    2301 start.go:340] cluster config:
	{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:06:56.225053    2301 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:06:56.232663    2301 out.go:177] * Starting "functional-323000" primary control-plane node in "functional-323000" cluster
	I0307 19:06:56.236699    2301 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:06:56.236710    2301 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:06:56.236725    2301 cache.go:56] Caching tarball of preloaded images
	I0307 19:06:56.236779    2301 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:06:56.236783    2301 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:06:56.236860    2301 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/config.json ...
	I0307 19:06:56.237319    2301 start.go:360] acquireMachinesLock for functional-323000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:06:56.237344    2301 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "functional-323000"
	I0307 19:06:56.237350    2301 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:06:56.237355    2301 fix.go:54] fixHost starting: 
	I0307 19:06:56.238048    2301 fix.go:112] recreateIfNeeded on functional-323000: state=Running err=<nil>
	W0307 19:06:56.238054    2301 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:06:56.246683    2301 out.go:177] * Updating the running qemu2 "functional-323000" VM ...
	I0307 19:06:56.249628    2301 machine.go:94] provisionDockerMachine start ...
	I0307 19:06:56.249663    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.249767    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.249770    2301 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:06:56.304466    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-323000
	
	I0307 19:06:56.304477    2301 buildroot.go:166] provisioning hostname "functional-323000"
	I0307 19:06:56.304525    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.304640    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.304643    2301 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-323000 && echo "functional-323000" | sudo tee /etc/hostname
	I0307 19:06:56.363873    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-323000
	
	I0307 19:06:56.363914    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.364022    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.364028    2301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-323000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-323000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-323000' | sudo tee -a /etc/hosts; 
				fi
			fi
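	The shell snippet above is the idempotent hosts-entry update: leave /etc/hosts alone if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, and append otherwise. A minimal Go sketch of the same check-then-rewrite logic; the local file path and direct file access are illustrative stand-ins for what minikube runs over SSH:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell logic above: if no line already ends
// with the hostname, either rewrite an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // entry already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", hostname))...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Throwaway file so the sketch does not touch the real /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1 localhost\n"), 0o644)
	if err := ensureHostsEntry("hosts.test", "functional-323000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```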
	I0307 19:06:56.417237    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:06:56.417244    2301 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18333-1199/.minikube CaCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18333-1199/.minikube}
	I0307 19:06:56.417251    2301 buildroot.go:174] setting up certificates
	I0307 19:06:56.417254    2301 provision.go:84] configureAuth start
	I0307 19:06:56.417257    2301 provision.go:143] copyHostCerts
	I0307 19:06:56.417329    2301 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem, removing ...
	I0307 19:06:56.417334    2301 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem
	I0307 19:06:56.417464    2301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem (1082 bytes)
	I0307 19:06:56.417646    2301 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem, removing ...
	I0307 19:06:56.417647    2301 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem
	I0307 19:06:56.417699    2301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem (1123 bytes)
	I0307 19:06:56.417819    2301 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem, removing ...
	I0307 19:06:56.417820    2301 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem
	I0307 19:06:56.417860    2301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem (1675 bytes)
	I0307 19:06:56.417946    2301 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem org=jenkins.functional-323000 san=[127.0.0.1 192.168.105.4 functional-323000 localhost minikube]
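	The server cert generated here is signed by the shared minikube CA and carries the SAN list shown in the log (loopback, the VM IP, the hostname, localhost, minikube). A sketch of equivalent SAN-bearing issuance with Go's crypto/x509; the self-signed CA, 2048-bit keys, and elided error handling are assumptions for the example, not minikube's actual key material:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server certificate with the same SANs as
// the san=[...] list above. caCert/caKey stand in for ca.pem and ca-key.pem.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-323000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		DNSNames:     []string{"functional-323000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.4")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Self-signed CA for the demo; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	if der, err := newServerCert(caCert, caKey); err == nil {
		fmt.Printf("server cert: %d DER bytes\n", len(der))
	}
}
```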
	I0307 19:06:56.484932    2301 provision.go:177] copyRemoteCerts
	I0307 19:06:56.484959    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:06:56.484965    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:06:56.515333    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0307 19:06:56.523726    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 19:06:56.532445    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:06:56.540593    2301 provision.go:87] duration metric: took 123.335792ms to configureAuth
	I0307 19:06:56.540599    2301 buildroot.go:189] setting minikube options for container-runtime
	I0307 19:06:56.540707    2301 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:06:56.540738    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.540820    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.540823    2301 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 19:06:56.598937    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 19:06:56.598943    2301 buildroot.go:70] root file system type: tmpfs
	I0307 19:06:56.598991    2301 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 19:06:56.599041    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.599141    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.599172    2301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 19:06:56.657222    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 19:06:56.657275    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.657380    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.657386    2301 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 19:06:56.713609    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:06:56.713617    2301 machine.go:97] duration metric: took 463.998417ms to provisionDockerMachine
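	provisionDockerMachine finishes by syncing the rendered unit: the new content goes to docker.service.new, the live file is replaced only when `diff -u` reports a difference, and daemon-reload plus restart run only in that case. A sketch of that compare-then-swap pattern, run locally rather than over SSH; the paths are throwaways and the `diff`/`systemctl` invocations assume a Unix host with systemd:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// syncUnit mirrors the update above: write the rendered unit to <path>.new,
// replace the live file only when it differs, then reload and restart.
func syncUnit(path, rendered string) error {
	if err := os.WriteFile(path+".new", []byte(rendered), 0o644); err != nil {
		return err
	}
	// diff exits 0 when the files are identical, 1 when they differ.
	if err := exec.Command("diff", "-u", path, path+".new").Run(); err == nil {
		return os.Remove(path + ".new") // no drift: keep the live unit, skip restart
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := syncUnit("docker.service.test", "[Unit]\nDescription=demo\n"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```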
	I0307 19:06:56.713621    2301 start.go:293] postStartSetup for "functional-323000" (driver="qemu2")
	I0307 19:06:56.713627    2301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:06:56.713677    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:06:56.713692    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:06:56.743258    2301 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:06:56.744837    2301 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 19:06:56.744843    2301 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/addons for local assets ...
	I0307 19:06:56.744910    2301 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/files for local assets ...
	I0307 19:06:56.745023    2301 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem -> 16202.pem in /etc/ssl/certs
	I0307 19:06:56.745131    2301 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/test/nested/copy/1620/hosts -> hosts in /etc/test/nested/copy/1620
	I0307 19:06:56.745160    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1620
	I0307 19:06:56.748397    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:06:56.756357    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/test/nested/copy/1620/hosts --> /etc/test/nested/copy/1620/hosts (40 bytes)
	I0307 19:06:56.764598    2301 start.go:296] duration metric: took 50.973708ms for postStartSetup
	I0307 19:06:56.764610    2301 fix.go:56] duration metric: took 527.271ms for fixHost
	I0307 19:06:56.764645    2301 main.go:141] libmachine: Using SSH client type: native
	I0307 19:06:56.764743    2301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10317da30] 0x103180290 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0307 19:06:56.764746    2301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 19:06:56.817946    2301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867216.842870440
	
	I0307 19:06:56.817952    2301 fix.go:216] guest clock: 1709867216.842870440
	I0307 19:06:56.817955    2301 fix.go:229] Guest: 2024-03-07 19:06:56.84287044 -0800 PST Remote: 2024-03-07 19:06:56.764611 -0800 PST m=+0.629941460 (delta=78.25944ms)
	I0307 19:06:56.817965    2301 fix.go:200] guest clock delta is within tolerance: 78.25944ms
	I0307 19:06:56.817967    2301 start.go:83] releasing machines lock for "functional-323000", held for 580.636292ms
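	The guest-clock check runs `date +%s.%N` in the VM, subtracts the host clock, and accepts the result when the delta is inside a tolerance, as in the fix.go lines above. A sketch of that parse-and-compare step; the 2-second tolerance in main is an assumption for the example:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the signed
// offset from the local clock plus whether it is within tolerance.
func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, false, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, false, err
		}
	}
	delta := time.Unix(sec, nsec).Sub(time.Now()) // guest minus host
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Timestamp taken from the log; run today it will be far out of tolerance.
	d, ok, err := clockDelta("1709867216.842870440", 2*time.Second)
	fmt.Println(d, ok, err)
}
```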
	I0307 19:06:56.818358    2301 ssh_runner.go:195] Run: cat /version.json
	I0307 19:06:56.818364    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:06:56.818371    2301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:06:56.818385    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:06:56.849078    2301 ssh_runner.go:195] Run: systemctl --version
	I0307 19:06:56.892496    2301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 19:06:56.894517    2301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 19:06:56.894542    2301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 19:06:56.897890    2301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0307 19:06:56.897895    2301 start.go:494] detecting cgroup driver to use...
	I0307 19:06:56.897964    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:06:56.904330    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 19:06:56.908030    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:06:56.911948    2301 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:06:56.911971    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:06:56.915837    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:06:56.919707    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:06:56.923630    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:06:56.927607    2301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:06:56.931239    2301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 19:06:56.935419    2301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:06:56.939433    2301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:06:56.943433    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:06:57.032516    2301 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 19:06:57.042901    2301 start.go:494] detecting cgroup driver to use...
	I0307 19:06:57.042962    2301 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 19:06:57.049027    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:06:57.054782    2301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 19:06:57.063764    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:06:57.069265    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:06:57.074577    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:06:57.081394    2301 ssh_runner.go:195] Run: which cri-dockerd
	I0307 19:06:57.082762    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 19:06:57.086328    2301 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 19:06:57.092216    2301 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 19:06:57.187249    2301 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 19:06:57.292626    2301 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 19:06:57.292678    2301 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 19:06:57.299167    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:06:57.387504    2301 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:07:08.651225    2301 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.264028958s)
	I0307 19:07:08.651283    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 19:07:08.657149    2301 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 19:07:08.664557    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:07:08.669992    2301 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 19:07:08.754320    2301 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 19:07:08.843499    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:07:08.929907    2301 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 19:07:08.937148    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:07:08.942106    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:07:09.026994    2301 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 19:07:09.055263    2301 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 19:07:09.055320    2301 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 19:07:09.057762    2301 start.go:562] Will wait 60s for crictl version
	I0307 19:07:09.057789    2301 ssh_runner.go:195] Run: which crictl
	I0307 19:07:09.059329    2301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:07:09.076482    2301 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 19:07:09.076559    2301 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:07:09.084133    2301 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:07:09.096201    2301 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 19:07:09.096328    2301 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0307 19:07:09.102087    2301 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0307 19:07:09.106099    2301 kubeadm.go:877] updating cluster {Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 19:07:09.106150    2301 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:07:09.106186    2301 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:07:09.112184    2301 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-323000
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0307 19:07:09.112191    2301 docker.go:615] Images already preloaded, skipping extraction
	I0307 19:07:09.112236    2301 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:07:09.117574    2301 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-323000
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0307 19:07:09.117579    2301 cache_images.go:84] Images are preloaded, skipping loading
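	"Images are preloaded" is decided by listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every required image appears, which is what lets the tarball extraction be skipped. A small sketch of that set check; the image lists in main are trimmed examples from the stdout block above:

```go
package main

import "fmt"

// imagesPreloaded reports whether every required image already shows up in
// the `docker images` listing, allowing the loader to skip extraction.
func imagesPreloaded(have, want []string) bool {
	seen := make(map[string]bool, len(have))
	for _, img := range have {
		seen[img] = true
	}
	for _, img := range want {
		if !seen[img] {
			return false
		}
	}
	return true
}

func main() {
	have := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
	want := []string{"registry.k8s.io/pause:3.9"}
	fmt.Println(imagesPreloaded(have, want)) // true
}
```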
	I0307 19:07:09.117583    2301 kubeadm.go:928] updating node { 192.168.105.4 8441 v1.28.4 docker true true} ...
	I0307 19:07:09.117639    2301 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-323000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 19:07:09.117693    2301 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 19:07:09.125137    2301 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0307 19:07:09.125193    2301 cni.go:84] Creating CNI manager for ""
	I0307 19:07:09.125199    2301 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:07:09.125203    2301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:07:09.125211    2301 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-323000 NodeName:functional-323000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 19:07:09.125292    2301 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-323000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 19:07:09.125356    2301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 19:07:09.129418    2301 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:07:09.129445    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:07:09.133064    2301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0307 19:07:09.138998    2301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:07:09.145129    2301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0307 19:07:09.151066    2301 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0307 19:07:09.152655    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:07:09.243829    2301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:07:09.249525    2301 certs.go:68] Setting up /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000 for IP: 192.168.105.4
	I0307 19:07:09.249529    2301 certs.go:194] generating shared ca certs ...
	I0307 19:07:09.249536    2301 certs.go:226] acquiring lock for ca certs: {Name:mkeed6c4d5ba27d3ef2bc04c52c43819ca546cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:07:09.249698    2301 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key
	I0307 19:07:09.249750    2301 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key
	I0307 19:07:09.249756    2301 certs.go:256] generating profile certs ...
	I0307 19:07:09.249817    2301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.key
	I0307 19:07:09.249864    2301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/apiserver.key.eece9718
	I0307 19:07:09.249908    2301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/proxy-client.key
	I0307 19:07:09.250045    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem (1338 bytes)
	W0307 19:07:09.250072    2301 certs.go:480] ignoring /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620_empty.pem, impossibly tiny 0 bytes
	I0307 19:07:09.250076    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:07:09.250095    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:07:09.250122    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:07:09.250135    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem (1675 bytes)
	I0307 19:07:09.250167    2301 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:07:09.250475    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:07:09.259426    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:07:09.267824    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:07:09.276282    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:07:09.284090    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0307 19:07:09.292249    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 19:07:09.300830    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:07:09.308928    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 19:07:09.317751    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem --> /usr/share/ca-certificates/1620.pem (1338 bytes)
	I0307 19:07:09.325917    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /usr/share/ca-certificates/16202.pem (1708 bytes)
	I0307 19:07:09.334397    2301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:07:09.343040    2301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:07:09.349172    2301 ssh_runner.go:195] Run: openssl version
	I0307 19:07:09.351227    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16202.pem && ln -fs /usr/share/ca-certificates/16202.pem /etc/ssl/certs/16202.pem"
	I0307 19:07:09.355118    2301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16202.pem
	I0307 19:07:09.357030    2301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:04 /usr/share/ca-certificates/16202.pem
	I0307 19:07:09.357049    2301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16202.pem
	I0307 19:07:09.359094    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16202.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:07:09.362804    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:07:09.366931    2301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:07:09.368775    2301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:57 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:07:09.368790    2301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:07:09.370979    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 19:07:09.374676    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1620.pem && ln -fs /usr/share/ca-certificates/1620.pem /etc/ssl/certs/1620.pem"
	I0307 19:07:09.378796    2301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1620.pem
	I0307 19:07:09.380422    2301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:04 /usr/share/ca-certificates/1620.pem
	I0307 19:07:09.380436    2301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1620.pem
	I0307 19:07:09.382579    2301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1620.pem /etc/ssl/certs/51391683.0"
	I0307 19:07:09.386505    2301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:07:09.388270    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 19:07:09.390415    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 19:07:09.392523    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 19:07:09.394787    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 19:07:09.396879    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 19:07:09.398896    2301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0307 19:07:09.401066    2301 kubeadm.go:391] StartCluster: {Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:07:09.401141    2301 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:07:09.407139    2301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 19:07:09.411118    2301 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 19:07:09.411121    2301 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 19:07:09.411123    2301 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 19:07:09.411147    2301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 19:07:09.414722    2301 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:07:09.415019    2301 kubeconfig.go:125] found "functional-323000" server: "https://192.168.105.4:8441"
	I0307 19:07:09.415615    2301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 19:07:09.419166    2301 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
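	Drift detection here is a unified diff between the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new: a non-empty diff (exit status 1) triggers the reconfigure path logged above. A sketch of that decision using throwaway files; the paths stand in for /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// configDrift reports whether the staged config differs from the live one,
// returning the unified diff that would be logged when it does.
func configDrift(current, staged string) (bool, string, error) {
	cmd := exec.Command("diff", "-u", current, staged)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err == nil {
		return false, "", nil // identical: the restart can reuse the config
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, out.String(), nil // files differ: reconfigure from the new file
	}
	return false, "", err // diff itself failed (e.g. missing file)
}

func main() {
	_ = os.WriteFile("kubeadm.yaml", []byte("a: 1\n"), 0o644)
	_ = os.WriteFile("kubeadm.yaml.new", []byte("a: 2\n"), 0o644)
	drift, diff, err := configDrift("kubeadm.yaml", "kubeadm.yaml.new")
	fmt.Println(drift, err)
	fmt.Print(diff)
}
```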
	I0307 19:07:09.419169    2301 kubeadm.go:1153] stopping kube-system containers ...
	I0307 19:07:09.419205    2301 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:07:09.426686    2301 docker.go:483] Stopping containers: [277719ce70b6 4be87bbd9d27 000d2428c471 ea27419905d3 f147948201d5 bed30db45c29 eeb0fab0784a 199dbcd67f5e 0756aa4b4ff5 301da11aa479 d336c5fdc128 f65426e6124c 3bde797934c5 6c34aafee4d7 598fbb84ef29 580b4ffd4aa6 f4145e9d8293 8d301fe3265d 148c2773067e fd0286e8a756 ddd7d500af98 db22881bc919 50da85f6a926 bfdce1b90814 49c5f55fa851 8ecdfd137f0c ccb524bbd396 5c07a5618bac 7aee6a46c0da]
	I0307 19:07:09.426740    2301 ssh_runner.go:195] Run: docker stop 277719ce70b6 4be87bbd9d27 000d2428c471 ea27419905d3 f147948201d5 bed30db45c29 eeb0fab0784a 199dbcd67f5e 0756aa4b4ff5 301da11aa479 d336c5fdc128 f65426e6124c 3bde797934c5 6c34aafee4d7 598fbb84ef29 580b4ffd4aa6 f4145e9d8293 8d301fe3265d 148c2773067e fd0286e8a756 ddd7d500af98 db22881bc919 50da85f6a926 bfdce1b90814 49c5f55fa851 8ecdfd137f0c ccb524bbd396 5c07a5618bac 7aee6a46c0da
	I0307 19:07:09.433491    2301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 19:07:09.525890    2301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:07:09.530809    2301 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar  8 03:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar  8 03:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar  8 03:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar  8 03:06 /etc/kubernetes/scheduler.conf
	
	I0307 19:07:09.530843    2301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0307 19:07:09.534929    2301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0307 19:07:09.538803    2301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0307 19:07:09.542619    2301 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:07:09.542639    2301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:07:09.546192    2301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0307 19:07:09.549738    2301 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:07:09.549762    2301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 19:07:09.553254    2301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:07:09.556524    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:09.578819    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:09.943003    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:10.059795    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:10.084516    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:10.109004    2301 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:07:10.109077    2301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:07:10.611141    2301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:07:11.111114    2301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:07:11.118830    2301 api_server.go:72] duration metric: took 1.009853959s to wait for apiserver process to appear ...
	I0307 19:07:11.118838    2301 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:07:11.118847    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:12.422157    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0307 19:07:12.422164    2301 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0307 19:07:12.422171    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:12.427038    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0307 19:07:12.427043    2301 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0307 19:07:12.620855    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:12.623997    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0307 19:07:12.624001    2301 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0307 19:07:13.120844    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:13.123963    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0307 19:07:13.123970    2301 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0307 19:07:13.620883    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:13.623853    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
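	This 403 -> 500 -> 200 progression is the expected restart pattern: an unauthenticated probe is rejected outright until the rbac/bootstrap-roles post-start hook grants anonymous access to /healthz, the endpoint then returns 500 while the remaining hooks finish, and finally 200. A hedged way to reproduce the probe by hand (endpoint and profile name taken from this log; the flags are standard curl/kubectl):
	  $ curl -k https://192.168.105.4:8441/healthz                          # anonymous: 403 until RBAC bootstrap completes
	  $ kubectl --context functional-323000 get --raw '/healthz?verbose'    # authenticated: per-check [+]/[-] list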
	I0307 19:07:13.628240    2301 api_server.go:141] control plane version: v1.28.4
	I0307 19:07:13.628247    2301 api_server.go:131] duration metric: took 2.509477583s to wait for apiserver health ...
	I0307 19:07:13.628251    2301 cni.go:84] Creating CNI manager for ""
	I0307 19:07:13.628256    2301 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:07:13.683155    2301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 19:07:13.684618    2301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 19:07:13.690034    2301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
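	The 457-byte conflist itself is not reproduced in the log; it can be inspected on the node, and a typical bridge conflist has roughly this shape (the values below are illustrative assumptions, not the exact bytes minikube wrote; the pod subnet matches the node's PodCIDR reported further down):
	  $ minikube -p functional-323000 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" } }
	    ]
	  }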
	I0307 19:07:13.698248    2301 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 19:07:13.703198    2301 system_pods.go:59] 7 kube-system pods found
	I0307 19:07:13.703209    2301 system_pods.go:61] "coredns-5dd5756b68-726ql" [867557c0-edb9-4e61-9df8-e193c9b09680] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0307 19:07:13.703213    2301 system_pods.go:61] "etcd-functional-323000" [9ca6f1eb-6ad5-4f47-a09a-0e399434ff9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0307 19:07:13.703216    2301 system_pods.go:61] "kube-apiserver-functional-323000" [7834ddc4-61fe-444e-b91d-d9ed8cc514c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0307 19:07:13.703218    2301 system_pods.go:61] "kube-controller-manager-functional-323000" [0c5017d3-087e-4ef0-8ec5-d309ffaa7e6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0307 19:07:13.703221    2301 system_pods.go:61] "kube-proxy-lpkpc" [aa670d62-49fc-48f8-bdbd-850b1ee117a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0307 19:07:13.703223    2301 system_pods.go:61] "kube-scheduler-functional-323000" [d4f26210-0cc7-448c-88f6-5ebb89331262] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0307 19:07:13.703225    2301 system_pods.go:61] "storage-provisioner" [9debf6ba-7574-48c4-8249-f238fb1a8a0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0307 19:07:13.703234    2301 system_pods.go:74] duration metric: took 4.9805ms to wait for pod list to return data ...
	I0307 19:07:13.703238    2301 node_conditions.go:102] verifying NodePressure condition ...
	I0307 19:07:13.705526    2301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 19:07:13.705533    2301 node_conditions.go:123] node cpu capacity is 2
	I0307 19:07:13.705538    2301 node_conditions.go:105] duration metric: took 2.298166ms to run NodePressure ...
	I0307 19:07:13.705545    2301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:07:13.786514    2301 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0307 19:07:13.788679    2301 kubeadm.go:733] kubelet initialised
	I0307 19:07:13.788682    2301 kubeadm.go:734] duration metric: took 2.162042ms waiting for restarted kubelet to initialise ...
	I0307 19:07:13.788686    2301 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:07:13.791622    2301 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-726ql" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:15.796806    2301 pod_ready.go:102] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:17.797078    2301 pod_ready.go:102] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:20.296756    2301 pod_ready.go:102] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:22.297062    2301 pod_ready.go:102] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:23.296094    2301 pod_ready.go:92] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:23.296099    2301 pod_ready.go:81] duration metric: took 9.504743292s for pod "coredns-5dd5756b68-726ql" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:23.296104    2301 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:25.301220    2301 pod_ready.go:102] pod "etcd-functional-323000" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:27.800824    2301 pod_ready.go:102] pod "etcd-functional-323000" in "kube-system" namespace has status "Ready":"False"
	I0307 19:07:28.302437    2301 pod_ready.go:92] pod "etcd-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:28.302443    2301 pod_ready.go:81] duration metric: took 5.006478833s for pod "etcd-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.302447    2301 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.305947    2301 pod_ready.go:92] pod "kube-apiserver-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:28.305952    2301 pod_ready.go:81] duration metric: took 3.502416ms for pod "kube-apiserver-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.305955    2301 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.309160    2301 pod_ready.go:92] pod "kube-controller-manager-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:28.309163    2301 pod_ready.go:81] duration metric: took 3.205875ms for pod "kube-controller-manager-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.309166    2301 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpkpc" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.311412    2301 pod_ready.go:92] pod "kube-proxy-lpkpc" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:28.311415    2301 pod_ready.go:81] duration metric: took 2.247ms for pod "kube-proxy-lpkpc" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.311418    2301 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.313584    2301 pod_ready.go:92] pod "kube-scheduler-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:28.313586    2301 pod_ready.go:81] duration metric: took 2.16625ms for pod "kube-scheduler-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.313589    2301 pod_ready.go:38] duration metric: took 14.525313417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:07:28.313602    2301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:07:28.317529    2301 ops.go:34] apiserver oom_adj: -16
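	For context: /proc/<pid>/oom_adj is the legacy OOM-score knob (range -17..15, where -17 disables OOM killing), so -16 means the kernel will almost never OOM-kill the apiserver. Roughly the same check by hand (the modern equivalent file is oom_score_adj, on a -1000..1000 scale):
	  $ minikube -p functional-323000 ssh -- 'sudo cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj'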
	I0307 19:07:28.317532    2301 kubeadm.go:591] duration metric: took 18.906944875s to restartPrimaryControlPlane
	I0307 19:07:28.317535    2301 kubeadm.go:393] duration metric: took 18.917008584s to StartCluster
	I0307 19:07:28.317542    2301 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:07:28.317611    2301 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:07:28.318195    2301 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:07:28.318413    2301 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:07:28.322715    2301 out.go:177] * Verifying Kubernetes components...
	I0307 19:07:28.318606    2301 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:07:28.318620    2301 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:07:28.329633    2301 addons.go:69] Setting storage-provisioner=true in profile "functional-323000"
	I0307 19:07:28.329647    2301 addons.go:234] Setting addon storage-provisioner=true in "functional-323000"
	I0307 19:07:28.329700    2301 addons.go:69] Setting default-storageclass=true in profile "functional-323000"
	I0307 19:07:28.329805    2301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0307 19:07:28.329808    2301 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:07:28.329827    2301 host.go:66] Checking if "functional-323000" exists ...
	I0307 19:07:28.329849    2301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-323000"
	I0307 19:07:28.330885    2301 addons.go:234] Setting addon default-storageclass=true in "functional-323000"
	W0307 19:07:28.330887    2301 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:07:28.330893    2301 host.go:66] Checking if "functional-323000" exists ...
	I0307 19:07:28.335708    2301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:07:28.334784    2301 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:07:28.339663    2301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:07:28.339670    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:07:28.339675    2301 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:07:28.339677    2301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:07:28.339680    2301 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
	I0307 19:07:28.434364    2301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:07:28.440251    2301 node_ready.go:35] waiting up to 6m0s for node "functional-323000" to be "Ready" ...
	I0307 19:07:28.501597    2301 node_ready.go:49] node "functional-323000" has status "Ready":"True"
	I0307 19:07:28.501608    2301 node_ready.go:38] duration metric: took 61.344166ms for node "functional-323000" to be "Ready" ...
	I0307 19:07:28.501611    2301 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:07:28.510367    2301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:07:28.512131    2301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:07:28.706338    2301 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-726ql" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:28.872909    2301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 19:07:28.876911    2301 addons.go:505] duration metric: took 558.31ms for enable addons: enabled=[storage-provisioner default-storageclass]
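	A quick external check that both addons landed (minikube's addons subcommand; profile and context names taken from this log):
	  $ minikube -p functional-323000 addons list | grep -E 'storage-provisioner|default-storageclass'
	  $ kubectl --context functional-323000 get storageclass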
	I0307 19:07:29.101838    2301 pod_ready.go:92] pod "coredns-5dd5756b68-726ql" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:29.101845    2301 pod_ready.go:81] duration metric: took 395.511333ms for pod "coredns-5dd5756b68-726ql" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:29.101849    2301 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:29.501918    2301 pod_ready.go:92] pod "etcd-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:29.501923    2301 pod_ready.go:81] duration metric: took 400.082958ms for pod "etcd-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:29.501927    2301 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:29.901505    2301 pod_ready.go:92] pod "kube-apiserver-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:29.901511    2301 pod_ready.go:81] duration metric: took 399.593041ms for pod "kube-apiserver-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:29.901515    2301 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:30.301772    2301 pod_ready.go:92] pod "kube-controller-manager-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:30.301778    2301 pod_ready.go:81] duration metric: took 400.271709ms for pod "kube-controller-manager-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:30.301782    2301 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpkpc" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:30.701436    2301 pod_ready.go:92] pod "kube-proxy-lpkpc" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:30.701442    2301 pod_ready.go:81] duration metric: took 399.669208ms for pod "kube-proxy-lpkpc" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:30.701446    2301 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:31.101356    2301 pod_ready.go:92] pod "kube-scheduler-functional-323000" in "kube-system" namespace has status "Ready":"True"
	I0307 19:07:31.101361    2301 pod_ready.go:81] duration metric: took 399.924291ms for pod "kube-scheduler-functional-323000" in "kube-system" namespace to be "Ready" ...
	I0307 19:07:31.101365    2301 pod_ready.go:38] duration metric: took 2.599823417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 19:07:31.101375    2301 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:07:31.101458    2301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:07:31.107245    2301 api_server.go:72] duration metric: took 2.788898125s to wait for apiserver process to appear ...
	I0307 19:07:31.107250    2301 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:07:31.107257    2301 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0307 19:07:31.110111    2301 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0307 19:07:31.110841    2301 api_server.go:141] control plane version: v1.28.4
	I0307 19:07:31.110844    2301 api_server.go:131] duration metric: took 3.592375ms to wait for apiserver health ...
	I0307 19:07:31.110846    2301 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 19:07:31.303471    2301 system_pods.go:59] 7 kube-system pods found
	I0307 19:07:31.303478    2301 system_pods.go:61] "coredns-5dd5756b68-726ql" [867557c0-edb9-4e61-9df8-e193c9b09680] Running
	I0307 19:07:31.303480    2301 system_pods.go:61] "etcd-functional-323000" [9ca6f1eb-6ad5-4f47-a09a-0e399434ff9e] Running
	I0307 19:07:31.303482    2301 system_pods.go:61] "kube-apiserver-functional-323000" [dc3eda4b-18a3-4a91-a4fe-822286870c0f] Running
	I0307 19:07:31.303483    2301 system_pods.go:61] "kube-controller-manager-functional-323000" [0c5017d3-087e-4ef0-8ec5-d309ffaa7e6b] Running
	I0307 19:07:31.303485    2301 system_pods.go:61] "kube-proxy-lpkpc" [aa670d62-49fc-48f8-bdbd-850b1ee117a8] Running
	I0307 19:07:31.303486    2301 system_pods.go:61] "kube-scheduler-functional-323000" [d4f26210-0cc7-448c-88f6-5ebb89331262] Running
	I0307 19:07:31.303487    2301 system_pods.go:61] "storage-provisioner" [9debf6ba-7574-48c4-8249-f238fb1a8a0a] Running
	I0307 19:07:31.303489    2301 system_pods.go:74] duration metric: took 192.6465ms to wait for pod list to return data ...
	I0307 19:07:31.303492    2301 default_sa.go:34] waiting for default service account to be created ...
	I0307 19:07:31.500284    2301 default_sa.go:45] found service account: "default"
	I0307 19:07:31.500290    2301 default_sa.go:55] duration metric: took 196.801667ms for default service account to be created ...
	I0307 19:07:31.500294    2301 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 19:07:31.703229    2301 system_pods.go:86] 7 kube-system pods found
	I0307 19:07:31.703237    2301 system_pods.go:89] "coredns-5dd5756b68-726ql" [867557c0-edb9-4e61-9df8-e193c9b09680] Running
	I0307 19:07:31.703240    2301 system_pods.go:89] "etcd-functional-323000" [9ca6f1eb-6ad5-4f47-a09a-0e399434ff9e] Running
	I0307 19:07:31.703241    2301 system_pods.go:89] "kube-apiserver-functional-323000" [dc3eda4b-18a3-4a91-a4fe-822286870c0f] Running
	I0307 19:07:31.703243    2301 system_pods.go:89] "kube-controller-manager-functional-323000" [0c5017d3-087e-4ef0-8ec5-d309ffaa7e6b] Running
	I0307 19:07:31.703245    2301 system_pods.go:89] "kube-proxy-lpkpc" [aa670d62-49fc-48f8-bdbd-850b1ee117a8] Running
	I0307 19:07:31.703246    2301 system_pods.go:89] "kube-scheduler-functional-323000" [d4f26210-0cc7-448c-88f6-5ebb89331262] Running
	I0307 19:07:31.703247    2301 system_pods.go:89] "storage-provisioner" [9debf6ba-7574-48c4-8249-f238fb1a8a0a] Running
	I0307 19:07:31.703250    2301 system_pods.go:126] duration metric: took 202.959625ms to wait for k8s-apps to be running ...
	I0307 19:07:31.703252    2301 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 19:07:31.703325    2301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:07:31.709340    2301 system_svc.go:56] duration metric: took 6.084375ms WaitForService to wait for kubelet
	I0307 19:07:31.709346    2301 kubeadm.go:576] duration metric: took 3.391017708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:07:31.709356    2301 node_conditions.go:102] verifying NodePressure condition ...
	I0307 19:07:31.901303    2301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 19:07:31.901308    2301 node_conditions.go:123] node cpu capacity is 2
	I0307 19:07:31.901313    2301 node_conditions.go:105] duration metric: took 191.960583ms to run NodePressure ...
	I0307 19:07:31.901319    2301 start.go:240] waiting for startup goroutines ...
	I0307 19:07:31.901323    2301 start.go:245] waiting for cluster config update ...
	I0307 19:07:31.901328    2301 start.go:254] writing updated cluster config ...
	I0307 19:07:31.901746    2301 ssh_runner.go:195] Run: rm -f paused
	I0307 19:07:31.932000    2301 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0307 19:07:31.935930    2301 out.go:177] * Done! kubectl is now configured to use "functional-323000" cluster and "default" namespace by default
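	The "==> ... <==" sections below are minikube's standard diagnostics bundle; the same dump can be regenerated against this profile with:
	  $ minikube -p functional-323000 logs --file=./functional-323000.log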
	
	
	==> Docker <==
	Mar 08 03:08:16 functional-323000 dockerd[7550]: time="2024-03-08T03:08:16.736500893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 03:08:16 functional-323000 dockerd[7550]: time="2024-03-08T03:08:16.736508102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:16 functional-323000 dockerd[7550]: time="2024-03-08T03:08:16.736579604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:16 functional-323000 cri-dockerd[7745]: time="2024-03-08T03:08:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89f8b8dd8f4689c003d226e40302f3abd4fa85d4889b228be385f814df3d53cf/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.176433978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.176640608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.176654442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.176758778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:22 functional-323000 dockerd[7542]: time="2024-03-08T03:08:22.203064788Z" level=info msg="ignoring event" container=da3b296944ead8237358857516dbcef4ac3bbb642196521777eddfdc792e1e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.203132790Z" level=info msg="shim disconnected" id=da3b296944ead8237358857516dbcef4ac3bbb642196521777eddfdc792e1e88 namespace=moby
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.203160124Z" level=warning msg="cleaning up after shim disconnected" id=da3b296944ead8237358857516dbcef4ac3bbb642196521777eddfdc792e1e88 namespace=moby
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.203195417Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:08:22 functional-323000 cri-dockerd[7745]: time="2024-03-08T03:08:22Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.298597938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.298629689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.298647981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.298677774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.333749523Z" level=info msg="shim disconnected" id=b4187cc3444f8de15749c55e2e01843156694b18fc9f96bdae5e85503c0e9e4b namespace=moby
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.333848484Z" level=warning msg="cleaning up after shim disconnected" id=b4187cc3444f8de15749c55e2e01843156694b18fc9f96bdae5e85503c0e9e4b namespace=moby
	Mar 08 03:08:22 functional-323000 dockerd[7542]: time="2024-03-08T03:08:22.333831150Z" level=info msg="ignoring event" container=b4187cc3444f8de15749c55e2e01843156694b18fc9f96bdae5e85503c0e9e4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:08:22 functional-323000 dockerd[7550]: time="2024-03-08T03:08:22.333890694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 03:08:23 functional-323000 dockerd[7542]: time="2024-03-08T03:08:23.714922378Z" level=info msg="ignoring event" container=89f8b8dd8f4689c003d226e40302f3abd4fa85d4889b228be385f814df3d53cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 03:08:23 functional-323000 dockerd[7550]: time="2024-03-08T03:08:23.715119675Z" level=info msg="shim disconnected" id=89f8b8dd8f4689c003d226e40302f3abd4fa85d4889b228be385f814df3d53cf namespace=moby
	Mar 08 03:08:23 functional-323000 dockerd[7550]: time="2024-03-08T03:08:23.715201677Z" level=warning msg="cleaning up after shim disconnected" id=89f8b8dd8f4689c003d226e40302f3abd4fa85d4889b228be385f814df3d53cf namespace=moby
	Mar 08 03:08:23 functional-323000 dockerd[7550]: time="2024-03-08T03:08:23.715218261Z" level=info msg="cleaning up dead shim" namespace=moby
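	To tail these runtime logs directly on the node rather than through the bundle (the unit names docker and cri-docker are what minikube's Buildroot image typically uses; an assumption worth verifying with systemctl):
	  $ minikube -p functional-323000 ssh -- sudo journalctl -u docker -u cri-docker --no-pager -n 50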
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b4187cc3444f8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 seconds ago        Exited              mount-munger              0                   89f8b8dd8f468       busybox-mount
	da3b296944ead       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   0974e7293ddea       hello-node-759d89bdcc-bn22b
	a00cc68157234       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   2c472a8072a57       hello-node-connect-7799dfb7c6-vw2b9
	db54d0fd753da       nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107                         26 seconds ago       Running             myfrontend                0                   55fafdd153841       sp-pod
	e32a55c5bcb90       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                         45 seconds ago       Running             nginx                     0                   62a42c194bd1c       nginx-svc
	5392536c13147       ba04bb24b9575                                                                                         58 seconds ago       Running             storage-provisioner       3                   425bcea9a706a       storage-provisioner
	8f9ef7fb44b15       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   2f36d16777750       coredns-5dd5756b68-726ql
	028bf7da99266       3ca3ca488cf13                                                                                         About a minute ago   Running             kube-proxy                2                   60243c2ca60a8       kube-proxy-lpkpc
	1eedf39f3e396       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   425bcea9a706a       storage-provisioner
	47ca6ef0fdb82       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   8ecab5f134682       etcd-functional-323000
	08af161f5dff9       05c284c929889                                                                                         About a minute ago   Running             kube-scheduler            2                   0f8fc56d3a44b       kube-scheduler-functional-323000
	844908b20b999       9961cbceaf234                                                                                         About a minute ago   Running             kube-controller-manager   2                   4a87da67c0032       kube-controller-manager-functional-323000
	d52c3056e1a5f       04b4c447bb9d4                                                                                         About a minute ago   Running             kube-apiserver            0                   deadd3b9fab2e       kube-apiserver-functional-323000
	277719ce70b66       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   ea27419905d33       coredns-5dd5756b68-726ql
	4be87bbd9d276       3ca3ca488cf13                                                                                         2 minutes ago        Exited              kube-proxy                1                   bed30db45c298       kube-proxy-lpkpc
	eeb0fab0784a6       9961cbceaf234                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f65426e6124c1       kube-controller-manager-functional-323000
	199dbcd67f5e1       05c284c929889                                                                                         2 minutes ago        Exited              kube-scheduler            1                   d336c5fdc1285       kube-scheduler-functional-323000
	0756aa4b4ff59       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   6c34aafee4d76       etcd-functional-323000
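	The table above is the CRI view served by cri-dockerd; the same listing can be pulled on the node with crictl:
	  $ minikube -p functional-323000 ssh -- sudo crictl ps -a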
	
	
	==> coredns [277719ce70b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48034 - 61331 "HINFO IN 1673014368247606368.2182916612831547689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004150315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8f9ef7fb44b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49097 - 21392 "HINFO IN 8360388705090151890.6098564624146527901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004408355s
	[INFO] 10.244.0.1:41277 - 23462 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000097045s
	[INFO] 10.244.0.1:59337 - 53259 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000091128s
	[INFO] 10.244.0.1:55977 - 15805 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00003025s
	[INFO] 10.244.0.1:30351 - 56417 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00099624s
	[INFO] 10.244.0.1:50209 - 16739 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000057835s
	[INFO] 10.244.0.1:20790 - 47626 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000053752s
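	The HINFO NXDOMAIN line is CoreDNS's startup self-check; the A/AAAA answers for nginx-svc.default.svc.cluster.local confirm that in-cluster service resolution works after the restart. To fetch these logs without the bundle:
	  $ kubectl --context functional-323000 -n kube-system logs -l k8s-app=kube-dns --tail=20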
	
	
	==> describe nodes <==
	Name:               functional-323000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-323000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=functional-323000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T19_05_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:05:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-323000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:08:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:08:14 +0000   Fri, 08 Mar 2024 03:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:08:14 +0000   Fri, 08 Mar 2024 03:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:08:14 +0000   Fri, 08 Mar 2024 03:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:08:14 +0000   Fri, 08 Mar 2024 03:05:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-323000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 d44d7fa3dc0d4eae93cf0e2b713c55e1
	  System UUID:                d44d7fa3dc0d4eae93cf0e2b713c55e1
	  Boot ID:                    1d2d2ab6-362b-43d8-83e0-99645d09ee0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-bn22b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-7799dfb7c6-vw2b9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 coredns-5dd5756b68-726ql                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 etcd-functional-323000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m7s
	  kube-system                 kube-apiserver-functional-323000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-323000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-proxy-lpkpc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-functional-323000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m53s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 2m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s                 kubelet          Node functional-323000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m7s                 kubelet          Node functional-323000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s                 kubelet          Node functional-323000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m7s                 kubelet          Starting kubelet.
	  Normal  NodeReady                3m3s                 kubelet          Node functional-323000 status is now: NodeReady
	  Normal  RegisteredNode           2m55s                node-controller  Node functional-323000 event: Registered Node functional-323000 in Controller
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node functional-323000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node functional-323000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node functional-323000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                 node-controller  Node functional-323000 event: Registered Node functional-323000 in Controller
	  Normal  Starting                 76s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)    kubelet          Node functional-323000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)    kubelet          Node functional-323000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)    kubelet          Node functional-323000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                  node-controller  Node functional-323000 event: Registered Node functional-323000 in Controller
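	The three Starting/RegisteredNode waves (about 3m7s, 2m8s, and 76s old) line up with the initial boot plus the two kubelet restarts visible earlier in this log. To regenerate this view:
	  $ kubectl --context functional-323000 describe node functional-323000
	  $ kubectl --context functional-323000 get events -A --sort-by=.metadata.creationTimestamp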
	
	
	==> dmesg <==
	[  +3.449181] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.753903] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.106293] systemd-fstab-generator[6540]: Ignoring "noauto" option for root device
	[ +18.067459] systemd-fstab-generator[7065]: Ignoring "noauto" option for root device
	[  +0.056089] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.099445] systemd-fstab-generator[7099]: Ignoring "noauto" option for root device
	[  +0.107301] systemd-fstab-generator[7111]: Ignoring "noauto" option for root device
	[  +0.092601] systemd-fstab-generator[7125]: Ignoring "noauto" option for root device
	[Mar 8 03:07] kauditd_printk_skb: 91 callbacks suppressed
	[  +6.294761] systemd-fstab-generator[7694]: Ignoring "noauto" option for root device
	[  +0.091834] systemd-fstab-generator[7706]: Ignoring "noauto" option for root device
	[  +0.083065] systemd-fstab-generator[7718]: Ignoring "noauto" option for root device
	[  +0.098606] systemd-fstab-generator[7733]: Ignoring "noauto" option for root device
	[  +0.216036] systemd-fstab-generator[7887]: Ignoring "noauto" option for root device
	[  +0.809165] systemd-fstab-generator[8008]: Ignoring "noauto" option for root device
	[  +3.484037] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.572646] kauditd_printk_skb: 28 callbacks suppressed
	[  +3.309686] systemd-fstab-generator[9278]: Ignoring "noauto" option for root device
	[  +5.062614] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.026108] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.012307] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.056667] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 8 03:08] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.500385] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.973346] kauditd_printk_skb: 6 callbacks suppressed
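	The systemd-fstab-generator and kauditd lines are routine noise from the repeated unit reloads during those restarts. The same kernel ring buffer can be read with:
	  $ minikube -p functional-323000 ssh -- sudo dmesg | tail -n 30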
	
	
	==> etcd [0756aa4b4ff5] <==
	{"level":"info","ts":"2024-03-08T03:06:19.486285Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:06:21.060847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T03:06:21.06099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T03:06:21.061034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-03-08T03:06:21.061301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T03:06:21.061384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-08T03:06:21.061614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T03:06:21.061706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-08T03:06:21.066541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:06:21.066594Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:06:21.068911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:06:21.069032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-08T03:06:21.066536Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-323000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:06:21.098148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:06:21.098294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T03:06:57.443949Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-08T03:06:57.44398Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-323000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-03-08T03:06:57.444037Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:06:57.44408Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:06:57.462904Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:06:57.462932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T03:06:57.462959Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-03-08T03:06:57.464317Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-08T03:06:57.464347Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-08T03:06:57.464351Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-323000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [47ca6ef0fdb8] <==
	{"level":"info","ts":"2024-03-08T03:07:10.866603Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T03:07:10.866634Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T03:07:10.86674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-03-08T03:07:10.866779Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-03-08T03:07:10.866826Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:07:10.866859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:07:10.869112Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T03:07:10.869377Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-08T03:07:10.86969Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-08T03:07:10.869783Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T03:07:10.86981Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:07:11.864288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-08T03:07:11.86437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-08T03:07:11.864411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-08T03:07:11.864476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-03-08T03:07:11.864489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-08T03:07:11.864506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-03-08T03:07:11.86452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-08T03:07:11.868254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-323000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:07:11.868525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:07:11.868553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:07:11.868802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T03:07:11.86859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:07:11.870301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:07:11.870328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 03:08:26 up 3 min,  0 users,  load average: 0.77, 0.39, 0.15
	Linux functional-323000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d52c3056e1a5] <==
	I0308 03:07:12.525124       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 03:07:12.525175       1 aggregator.go:166] initial CRD sync complete...
	I0308 03:07:12.525192       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 03:07:12.525221       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 03:07:12.525237       1 cache.go:39] Caches are synced for autoregister controller
	I0308 03:07:12.525381       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 03:07:12.525389       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 03:07:12.525437       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:07:12.525634       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 03:07:12.534799       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 03:07:12.551774       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:07:13.425338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 03:07:13.628984       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0308 03:07:13.629504       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 03:07:13.632072       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 03:07:13.780413       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 03:07:13.785200       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 03:07:13.800413       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 03:07:13.808369       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 03:07:13.810624       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 03:07:33.462761       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.156.225"}
	I0308 03:07:38.112377       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.151.122"}
	I0308 03:07:47.515076       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0308 03:07:47.570123       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.56.147"}
	I0308 03:08:07.710915       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.200.205"}
	
	
	==> kube-controller-manager [844908b20b99] <==
	I0308 03:07:25.088655       1 shared_informer.go:318] Caches are synced for GC
	I0308 03:07:25.409004       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:07:25.421133       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:07:25.421146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 03:07:43.956724       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:07:43.957647       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:07:47.516705       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0308 03:07:47.526884       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-vw2b9"
	I0308 03:07:47.531042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="14.64008ms"
	I0308 03:07:47.545559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="14.488033ms"
	I0308 03:07:47.555561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="9.960018ms"
	I0308 03:07:47.555604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="27.001µs"
	I0308 03:07:59.500927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28.126µs"
	I0308 03:08:00.497257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="26.959µs"
	I0308 03:08:01.517384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="37.376µs"
	I0308 03:08:07.668746       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0308 03:08:07.673586       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-bn22b"
	I0308 03:08:07.678769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="10.163762ms"
	I0308 03:08:07.684254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="5.454724ms"
	I0308 03:08:07.684281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="11.083µs"
	I0308 03:08:08.550000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="18.418µs"
	I0308 03:08:09.556013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="24.793µs"
	I0308 03:08:14.602511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="39.876µs"
	I0308 03:08:22.153462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="29.209µs"
	I0308 03:08:22.650034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="36.834µs"
	
	
	==> kube-controller-manager [eeb0fab0784a] <==
	I0308 03:06:33.809674       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0308 03:06:33.809699       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0308 03:06:33.809737       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0308 03:06:33.810874       1 shared_informer.go:318] Caches are synced for expand
	I0308 03:06:33.810892       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0308 03:06:33.812030       1 shared_informer.go:318] Caches are synced for TTL
	I0308 03:06:33.814198       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0308 03:06:33.816402       1 shared_informer.go:318] Caches are synced for GC
	I0308 03:06:33.825358       1 shared_informer.go:318] Caches are synced for cronjob
	I0308 03:06:33.828561       1 shared_informer.go:318] Caches are synced for persistent volume
	I0308 03:06:33.829653       1 shared_informer.go:318] Caches are synced for PVC protection
	I0308 03:06:33.830773       1 shared_informer.go:318] Caches are synced for endpoint
	I0308 03:06:33.832993       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0308 03:06:33.833089       1 shared_informer.go:318] Caches are synced for taint
	I0308 03:06:33.833145       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0308 03:06:33.833165       1 taint_manager.go:210] "Sending events to api server"
	I0308 03:06:33.833301       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0308 03:06:33.833422       1 event.go:307] "Event occurred" object="functional-323000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-323000 event: Registered Node functional-323000 in Controller"
	I0308 03:06:33.833445       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-323000"
	I0308 03:06:33.833798       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0308 03:06:34.006242       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 03:06:34.035338       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 03:06:34.355894       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:06:34.427579       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:06:34.427602       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [028bf7da9926] <==
	I0308 03:07:13.764849       1 server_others.go:69] "Using iptables proxy"
	I0308 03:07:13.773376       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0308 03:07:13.788752       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:07:13.788912       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:07:13.790527       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:07:13.790583       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:07:13.790670       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:07:13.790743       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:07:13.791047       1 config.go:188] "Starting service config controller"
	I0308 03:07:13.791075       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:07:13.791095       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:07:13.791111       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:07:13.791314       1 config.go:315] "Starting node config controller"
	I0308 03:07:13.791334       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:07:13.891630       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:07:13.891630       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:07:13.891646       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4be87bbd9d27] <==
	I0308 03:06:22.241792       1 server_others.go:69] "Using iptables proxy"
	I0308 03:06:22.249375       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0308 03:06:22.257421       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:06:22.257433       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:06:22.258054       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:06:22.258110       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:06:22.258165       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:06:22.258171       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:06:22.258497       1 config.go:188] "Starting service config controller"
	I0308 03:06:22.258511       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:06:22.258518       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:06:22.258520       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:06:22.258699       1 config.go:315] "Starting node config controller"
	I0308 03:06:22.258702       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:06:22.359139       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:06:22.359151       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:06:22.359162       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [08af161f5dff] <==
	I0308 03:07:11.419875       1 serving.go:348] Generated self-signed cert in-memory
	I0308 03:07:12.468411       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 03:07:12.468426       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:07:12.472937       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0308 03:07:12.472950       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0308 03:07:12.472967       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 03:07:12.472970       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:07:12.472975       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0308 03:07:12.472979       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0308 03:07:12.474154       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 03:07:12.474171       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:07:12.573966       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0308 03:07:12.574108       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0308 03:07:12.574196       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [199dbcd67f5e] <==
	I0308 03:06:20.105868       1 serving.go:348] Generated self-signed cert in-memory
	W0308 03:06:21.660193       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 03:06:21.660245       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:06:21.660254       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 03:06:21.660261       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 03:06:21.697048       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 03:06:21.697100       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:06:21.697727       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 03:06:21.697744       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:06:21.698299       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 03:06:21.698333       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:06:21.798689       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:06:57.434344       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0308 03:06:57.434552       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 08 03:08:09 functional-323000 kubelet[8015]: E0308 03:08:09.550032    8015 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-bn22b_default(2e6839ce-708d-480c-94b5-7e20aafbbe1d)\"" pod="default/hello-node-759d89bdcc-bn22b" podUID="2e6839ce-708d-480c-94b5-7e20aafbbe1d"
	Mar 08 03:08:10 functional-323000 kubelet[8015]: E0308 03:08:10.153755    8015 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:08:10 functional-323000 kubelet[8015]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:08:10 functional-323000 kubelet[8015]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:08:10 functional-323000 kubelet[8015]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:08:10 functional-323000 kubelet[8015]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:08:10 functional-323000 kubelet[8015]: I0308 03:08:10.230618    8015 scope.go:117] "RemoveContainer" containerID="301da11aa4796d3a65550d675e58569b7b228bdc59933be680cedc8ad3eba686"
	Mar 08 03:08:14 functional-323000 kubelet[8015]: I0308 03:08:14.147462    8015 scope.go:117] "RemoveContainer" containerID="e58a9395ca2e731686f6174d42e3c4e281c9d7bb891000321ee66ba67227957a"
	Mar 08 03:08:14 functional-323000 kubelet[8015]: I0308 03:08:14.590221    8015 scope.go:117] "RemoveContainer" containerID="e58a9395ca2e731686f6174d42e3c4e281c9d7bb891000321ee66ba67227957a"
	Mar 08 03:08:14 functional-323000 kubelet[8015]: I0308 03:08:14.590452    8015 scope.go:117] "RemoveContainer" containerID="a00cc681572349edcbd7a09804c5754c5de30d41661fb8c96239c73714e89b91"
	Mar 08 03:08:14 functional-323000 kubelet[8015]: E0308 03:08:14.590600    8015 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-vw2b9_default(6e2db38d-1590-48c5-9265-d1f23f7bf32b)\"" pod="default/hello-node-connect-7799dfb7c6-vw2b9" podUID="6e2db38d-1590-48c5-9265-d1f23f7bf32b"
	Mar 08 03:08:16 functional-323000 kubelet[8015]: I0308 03:08:16.411065    8015 topology_manager.go:215] "Topology Admit Handler" podUID="29e7340c-6b6e-40bc-9ae0-e7c171936e97" podNamespace="default" podName="busybox-mount"
	Mar 08 03:08:16 functional-323000 kubelet[8015]: I0308 03:08:16.552772    8015 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/29e7340c-6b6e-40bc-9ae0-e7c171936e97-test-volume\") pod \"busybox-mount\" (UID: \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\") " pod="default/busybox-mount"
	Mar 08 03:08:16 functional-323000 kubelet[8015]: I0308 03:08:16.552796    8015 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k64c\" (UniqueName: \"kubernetes.io/projected/29e7340c-6b6e-40bc-9ae0-e7c171936e97-kube-api-access-9k64c\") pod \"busybox-mount\" (UID: \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\") " pod="default/busybox-mount"
	Mar 08 03:08:22 functional-323000 kubelet[8015]: I0308 03:08:22.147615    8015 scope.go:117] "RemoveContainer" containerID="636bd3007d510b26d957700ec4edb7995c92ba167b43b03e4bf7d1801e1263cb"
	Mar 08 03:08:22 functional-323000 kubelet[8015]: I0308 03:08:22.642162    8015 scope.go:117] "RemoveContainer" containerID="636bd3007d510b26d957700ec4edb7995c92ba167b43b03e4bf7d1801e1263cb"
	Mar 08 03:08:22 functional-323000 kubelet[8015]: I0308 03:08:22.642319    8015 scope.go:117] "RemoveContainer" containerID="da3b296944ead8237358857516dbcef4ac3bbb642196521777eddfdc792e1e88"
	Mar 08 03:08:22 functional-323000 kubelet[8015]: E0308 03:08:22.642406    8015 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-bn22b_default(2e6839ce-708d-480c-94b5-7e20aafbbe1d)\"" pod="default/hello-node-759d89bdcc-bn22b" podUID="2e6839ce-708d-480c-94b5-7e20aafbbe1d"
	Mar 08 03:08:23 functional-323000 kubelet[8015]: I0308 03:08:23.899940    8015 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k64c\" (UniqueName: \"kubernetes.io/projected/29e7340c-6b6e-40bc-9ae0-e7c171936e97-kube-api-access-9k64c\") pod \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\" (UID: \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\") "
	Mar 08 03:08:23 functional-323000 kubelet[8015]: I0308 03:08:23.899960    8015 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/29e7340c-6b6e-40bc-9ae0-e7c171936e97-test-volume\") pod \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\" (UID: \"29e7340c-6b6e-40bc-9ae0-e7c171936e97\") "
	Mar 08 03:08:23 functional-323000 kubelet[8015]: I0308 03:08:23.899994    8015 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e7340c-6b6e-40bc-9ae0-e7c171936e97-test-volume" (OuterVolumeSpecName: "test-volume") pod "29e7340c-6b6e-40bc-9ae0-e7c171936e97" (UID: "29e7340c-6b6e-40bc-9ae0-e7c171936e97"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 08 03:08:23 functional-323000 kubelet[8015]: I0308 03:08:23.901233    8015 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e7340c-6b6e-40bc-9ae0-e7c171936e97-kube-api-access-9k64c" (OuterVolumeSpecName: "kube-api-access-9k64c") pod "29e7340c-6b6e-40bc-9ae0-e7c171936e97" (UID: "29e7340c-6b6e-40bc-9ae0-e7c171936e97"). InnerVolumeSpecName "kube-api-access-9k64c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:08:24 functional-323000 kubelet[8015]: I0308 03:08:24.000841    8015 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9k64c\" (UniqueName: \"kubernetes.io/projected/29e7340c-6b6e-40bc-9ae0-e7c171936e97-kube-api-access-9k64c\") on node \"functional-323000\" DevicePath \"\""
	Mar 08 03:08:24 functional-323000 kubelet[8015]: I0308 03:08:24.000854    8015 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/29e7340c-6b6e-40bc-9ae0-e7c171936e97-test-volume\") on node \"functional-323000\" DevicePath \"\""
	Mar 08 03:08:24 functional-323000 kubelet[8015]: I0308 03:08:24.653700    8015 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f8b8dd8f4689c003d226e40302f3abd4fa85d4889b228be385f814df3d53cf"
	
	
	==> storage-provisioner [1eedf39f3e39] <==
	I0308 03:07:13.682750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0308 03:07:13.684151       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5392536c1314] <==
	I0308 03:07:28.195558       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 03:07:28.199843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 03:07:28.199859       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 03:07:45.588297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 03:07:45.588369       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-323000_673284e0-bf4f-420c-960d-f24334842ce4!
	I0308 03:07:45.588660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35dae3f1-c526-4d86-a5fc-8446d7b0aa38", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-323000_673284e0-bf4f-420c-960d-f24334842ce4 became leader
	I0308 03:07:45.689053       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-323000_673284e0-bf4f-420c-960d-f24334842ce4!
	I0308 03:07:45.689120       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0308 03:07:45.689363       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f9ab6e49-d888-4d1b-9814-cc4e05e49691", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0308 03:07:45.689218       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a9f64b7d-1e52-46e0-85fa-d4303c636440 362 0 2024-03-08 03:05:33 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-08 03:05:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f9ab6e49-d888-4d1b-9814-cc4e05e49691 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f9ab6e49-d888-4d1b-9814-cc4e05e49691 683 0 2024-03-08 03:07:43 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-08 03:07:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-08 03:07:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0308 03:07:45.689770       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f9ab6e49-d888-4d1b-9814-cc4e05e49691" provisioned
	I0308 03:07:45.689797       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0308 03:07:45.689813       1 volume_store.go:212] Trying to save persistentvolume "pvc-f9ab6e49-d888-4d1b-9814-cc4e05e49691"
	I0308 03:07:45.711560       1 volume_store.go:219] persistentvolume "pvc-f9ab6e49-d888-4d1b-9814-cc4e05e49691" saved
	I0308 03:07:45.711791       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f9ab6e49-d888-4d1b-9814-cc4e05e49691", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f9ab6e49-d888-4d1b-9814-cc4e05e49691
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-323000 -n functional-323000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-323000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-323000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-323000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-323000/192.168.105.4
	Start Time:       Thu, 07 Mar 2024 19:08:16 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b4187cc3444f8de15749c55e2e01843156694b18fc9f96bdae5e85503c0e9e4b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 07 Mar 2024 19:08:22 -0800
	      Finished:     Thu, 07 Mar 2024 19:08:22 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k64c (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9k64c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-323000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.48s (5.48s including waiting)
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.67s)
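
The kubelet log in the post-mortem above shows the echoserver-arm container for both hello-node deployments stuck in CrashLoopBackOff, which is what keeps the service-connect check from ever getting an answer. A minimal triage sketch against the same profile (these commands are not part of the captured run; the pod name is copied from the kubelet log above):

	kubectl --context functional-323000 get pods -o wide
	kubectl --context functional-323000 logs hello-node-connect-7799dfb7c6-vw2b9 --previous
	kubectl --context functional-323000 describe pod hello-node-connect-7799dfb7c6-vw2b9

On an arm64 VM a crash loop like this is commonly an exec-format (wrong-architecture image) problem; running "docker image inspect --format '{{.Architecture}}' <image>" inside the VM would confirm or rule that out, but that reading is an assumption, not something these logs prove.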

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-501000 node stop m02 -v=7 --alsologtostderr: (12.192180083s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
E0307 19:15:21.636187    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:15:44.735075    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr: exit status 7 (2m55.972771166s)

                                                
                                                
-- stdout --
	ha-501000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-501000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-501000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:14:22.361790    3094 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:14:22.362063    3094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:14:22.362067    3094 out.go:304] Setting ErrFile to fd 2...
	I0307 19:14:22.362070    3094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:14:22.362181    3094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:14:22.362306    3094 out.go:298] Setting JSON to false
	I0307 19:14:22.362319    3094 mustload.go:65] Loading cluster: ha-501000
	I0307 19:14:22.362336    3094 notify.go:220] Checking for updates...
	I0307 19:14:22.362544    3094 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:14:22.362550    3094 status.go:255] checking status of ha-501000 ...
	I0307 19:14:22.363327    3094 status.go:330] ha-501000 host status = "Running" (err=<nil>)
	I0307 19:14:22.363344    3094 host.go:66] Checking if "ha-501000" exists ...
	I0307 19:14:22.363460    3094 host.go:66] Checking if "ha-501000" exists ...
	I0307 19:14:22.363720    3094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:14:22.363728    3094 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/id_rsa Username:docker}
	W0307 19:14:48.288329    3094 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 19:14:48.288491    3094 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 19:14:48.288512    3094 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 19:14:48.288520    3094 status.go:257] ha-501000 status: &{Name:ha-501000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:14:48.288535    3094 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 19:14:48.288541    3094 status.go:255] checking status of ha-501000-m02 ...
	I0307 19:14:48.288797    3094 status.go:330] ha-501000-m02 host status = "Stopped" (err=<nil>)
	I0307 19:14:48.288804    3094 status.go:343] host is not running, skipping remaining checks
	I0307 19:14:48.288807    3094 status.go:257] ha-501000-m02 status: &{Name:ha-501000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:14:48.288813    3094 status.go:255] checking status of ha-501000-m03 ...
	I0307 19:14:48.292042    3094 status.go:330] ha-501000-m03 host status = "Running" (err=<nil>)
	I0307 19:14:48.292058    3094 host.go:66] Checking if "ha-501000-m03" exists ...
	I0307 19:14:48.292228    3094 host.go:66] Checking if "ha-501000-m03" exists ...
	I0307 19:14:48.292360    3094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:14:48.292370    3094 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m03/id_rsa Username:docker}
	W0307 19:16:03.292407    3094 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 19:16:03.292464    3094 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 19:16:03.292473    3094 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 19:16:03.292477    3094 status.go:257] ha-501000-m03 status: &{Name:ha-501000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:16:03.292485    3094 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 19:16:03.292494    3094 status.go:255] checking status of ha-501000-m04 ...
	I0307 19:16:03.293302    3094 status.go:330] ha-501000-m04 host status = "Running" (err=<nil>)
	I0307 19:16:03.293310    3094 host.go:66] Checking if "ha-501000-m04" exists ...
	I0307 19:16:03.293428    3094 host.go:66] Checking if "ha-501000-m04" exists ...
	I0307 19:16:03.293555    3094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:16:03.293561    3094 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m04/id_rsa Username:docker}
	W0307 19:17:18.292738    3094 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 19:17:18.292782    3094 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 19:17:18.292793    3094 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 19:17:18.292797    3094 status.go:257] ha-501000-m04 status: &{Name:ha-501000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:17:18.292808    3094 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-501000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-501000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-501000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-501000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
E0307 19:17:37.768896    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 3 (25.956913708s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 19:17:44.248814    3168 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 19:17:44.248826    3168 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (214.12s)
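
Every ssh probe in the stderr block above times out dialing port 22 on 192.168.105.5, .7 and .8, and the RestartSecondaryNode output further below reports 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so the qemu2 guests most plausibly lost their socket_vmnet host networking rather than Kubernetes itself degrading. A host-side sketch to separate the two (assumed follow-up commands, not part of the captured run):

	ls -l /var/run/socket_vmnet
	nc -z -w 5 192.168.105.5 22; echo $?
	out/minikube-darwin-arm64 -p ha-501000 logs --file /tmp/ha-501000.log

If the socket is missing or nc cannot reach port 22, the status errors recorded here are a downstream symptom of the host networking layer.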

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.51s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0307 19:18:05.471145    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.549734834s)
ha_test.go:413: expected profile "ha-501000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-501000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-501000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-501000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 3 (25.955184625s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 19:19:28.750685    3221 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 19:19:28.750698    3221 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.51s)
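
The post-mortem and the FAIL above share one symptom: every SSH dial to a node IP ends in "connect: operation timed out", so status.go can never run its "df -h /var" probe. A quick reachability check, written here as a Go sketch under the assumption that the node IPs from this report (192.168.105.5 through .8) are still the assigned ones, separates a dead guest from a dead host network:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials the node's SSH port with a short timeout, the same
// operation the status checks in this log are timing out on.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Node IPs copied from the cluster config earlier in this report.
	for _, ip := range []string{"192.168.105.5", "192.168.105.6", "192.168.105.7", "192.168.105.8"} {
		if err := probeSSH(net.JoinHostPort(ip, "22")); err != nil {
			fmt.Printf("%s: unreachable: %v\n", ip, err)
			continue
		}
		fmt.Printf("%s: ssh port reachable\n", ip)
	}
}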

TestMutliControlPlane/serial/RestartSecondaryNode (209.02s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.086226125s)

-- stdout --
	* Starting "ha-501000-m02" control-plane node in "ha-501000" cluster
	* Restarting existing qemu2 VM for "ha-501000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-501000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:19:28.786186    3232 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:19:28.786434    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:19:28.786438    3232 out.go:304] Setting ErrFile to fd 2...
	I0307 19:19:28.786440    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:19:28.786577    3232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:19:28.786817    3232 mustload.go:65] Loading cluster: ha-501000
	I0307 19:19:28.787056    3232 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 19:19:28.787285    3232 host.go:58] "ha-501000-m02" host status: Stopped
	I0307 19:19:28.791852    3232 out.go:177] * Starting "ha-501000-m02" control-plane node in "ha-501000" cluster
	I0307 19:19:28.795850    3232 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:19:28.795867    3232 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:19:28.795874    3232 cache.go:56] Caching tarball of preloaded images
	I0307 19:19:28.795950    3232 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:19:28.795956    3232 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:19:28.796015    3232 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/ha-501000/config.json ...
	I0307 19:19:28.796449    3232 start.go:360] acquireMachinesLock for ha-501000-m02: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:19:28.796488    3232 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "ha-501000-m02"
	I0307 19:19:28.796496    3232 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:19:28.796499    3232 fix.go:54] fixHost starting: m02
	I0307 19:19:28.796641    3232 fix.go:112] recreateIfNeeded on ha-501000-m02: state=Stopped err=<nil>
	W0307 19:19:28.796647    3232 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:19:28.800791    3232 out.go:177] * Restarting existing qemu2 VM for "ha-501000-m02" ...
	I0307 19:19:28.804872    3232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:10:0c:d7:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/disk.qcow2
	I0307 19:19:28.807547    3232 main.go:141] libmachine: STDOUT: 
	I0307 19:19:28.807567    3232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:19:28.807601    3232 fix.go:56] duration metric: took 11.100417ms for fixHost
	I0307 19:19:28.807604    3232 start.go:83] releasing machines lock for "ha-501000-m02", held for 11.112833ms
	W0307 19:19:28.807611    3232 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:19:28.807632    3232 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:19:28.807636    3232 start.go:728] Will try again in 5 seconds ...
	I0307 19:19:33.808788    3232 start.go:360] acquireMachinesLock for ha-501000-m02: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:19:33.808915    3232 start.go:364] duration metric: took 91.166µs to acquireMachinesLock for "ha-501000-m02"
	I0307 19:19:33.808949    3232 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:19:33.808953    3232 fix.go:54] fixHost starting: m02
	I0307 19:19:33.809138    3232 fix.go:112] recreateIfNeeded on ha-501000-m02: state=Stopped err=<nil>
	W0307 19:19:33.809144    3232 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:19:33.813277    3232 out.go:177] * Restarting existing qemu2 VM for "ha-501000-m02" ...
	I0307 19:19:33.817242    3232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:10:0c:d7:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/disk.qcow2
	I0307 19:19:33.819252    3232 main.go:141] libmachine: STDOUT: 
	I0307 19:19:33.819269    3232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:19:33.819293    3232 fix.go:56] duration metric: took 10.336166ms for fixHost
	I0307 19:19:33.819297    3232 start.go:83] releasing machines lock for "ha-501000-m02", held for 10.378083ms
	W0307 19:19:33.819328    3232 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:19:33.824308    3232 out.go:177] 
	W0307 19:19:33.827253    3232 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:19:33.827259    3232 out.go:239] * 
	* 
	W0307 19:19:33.828900    3232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:19:33.833284    3232 out.go:177] 

** /stderr **
ha_test.go:422: I0307 19:19:28.786186    3232 out.go:291] Setting OutFile to fd 1 ...
I0307 19:19:28.786434    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:19:28.786438    3232 out.go:304] Setting ErrFile to fd 2...
I0307 19:19:28.786440    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:19:28.786577    3232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:19:28.786817    3232 mustload.go:65] Loading cluster: ha-501000
I0307 19:19:28.787056    3232 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
W0307 19:19:28.787285    3232 host.go:58] "ha-501000-m02" host status: Stopped
I0307 19:19:28.791852    3232 out.go:177] * Starting "ha-501000-m02" control-plane node in "ha-501000" cluster
I0307 19:19:28.795850    3232 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0307 19:19:28.795867    3232 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0307 19:19:28.795874    3232 cache.go:56] Caching tarball of preloaded images
I0307 19:19:28.795950    3232 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0307 19:19:28.795956    3232 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0307 19:19:28.796015    3232 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/ha-501000/config.json ...
I0307 19:19:28.796449    3232 start.go:360] acquireMachinesLock for ha-501000-m02: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 19:19:28.796488    3232 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "ha-501000-m02"
I0307 19:19:28.796496    3232 start.go:96] Skipping create...Using existing machine configuration
I0307 19:19:28.796499    3232 fix.go:54] fixHost starting: m02
I0307 19:19:28.796641    3232 fix.go:112] recreateIfNeeded on ha-501000-m02: state=Stopped err=<nil>
W0307 19:19:28.796647    3232 fix.go:138] unexpected machine state, will restart: <nil>
I0307 19:19:28.800791    3232 out.go:177] * Restarting existing qemu2 VM for "ha-501000-m02" ...
I0307 19:19:28.804872    3232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:10:0c:d7:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/disk.qcow2
I0307 19:19:28.807547    3232 main.go:141] libmachine: STDOUT: 
I0307 19:19:28.807567    3232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0307 19:19:28.807601    3232 fix.go:56] duration metric: took 11.100417ms for fixHost
I0307 19:19:28.807604    3232 start.go:83] releasing machines lock for "ha-501000-m02", held for 11.112833ms
W0307 19:19:28.807611    3232 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0307 19:19:28.807632    3232 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0307 19:19:28.807636    3232 start.go:728] Will try again in 5 seconds ...
I0307 19:19:33.808788    3232 start.go:360] acquireMachinesLock for ha-501000-m02: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 19:19:33.808915    3232 start.go:364] duration metric: took 91.166µs to acquireMachinesLock for "ha-501000-m02"
I0307 19:19:33.808949    3232 start.go:96] Skipping create...Using existing machine configuration
I0307 19:19:33.808953    3232 fix.go:54] fixHost starting: m02
I0307 19:19:33.809138    3232 fix.go:112] recreateIfNeeded on ha-501000-m02: state=Stopped err=<nil>
W0307 19:19:33.809144    3232 fix.go:138] unexpected machine state, will restart: <nil>
I0307 19:19:33.813277    3232 out.go:177] * Restarting existing qemu2 VM for "ha-501000-m02" ...
I0307 19:19:33.817242    3232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:10:0c:d7:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m02/disk.qcow2
I0307 19:19:33.819252    3232 main.go:141] libmachine: STDOUT: 
I0307 19:19:33.819269    3232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0307 19:19:33.819293    3232 fix.go:56] duration metric: took 10.336166ms for fixHost
I0307 19:19:33.819297    3232 start.go:83] releasing machines lock for "ha-501000-m02", held for 10.378083ms
W0307 19:19:33.819328    3232 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0307 19:19:33.824308    3232 out.go:177] 
W0307 19:19:33.827253    3232 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0307 19:19:33.827259    3232 out.go:239] * 
* 
W0307 19:19:33.828900    3232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 19:19:33.833284    3232 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-501000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
E0307 19:20:44.724354    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:22:07.785668    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr: exit status 7 (2m57.936608s)

-- stdout --
	ha-501000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-501000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-501000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 19:19:33.872425    3236 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:19:33.872583    3236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:19:33.872587    3236 out.go:304] Setting ErrFile to fd 2...
	I0307 19:19:33.872589    3236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:19:33.872719    3236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:19:33.872838    3236 out.go:298] Setting JSON to false
	I0307 19:19:33.872855    3236 mustload.go:65] Loading cluster: ha-501000
	I0307 19:19:33.872894    3236 notify.go:220] Checking for updates...
	I0307 19:19:33.873062    3236 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:19:33.873067    3236 status.go:255] checking status of ha-501000 ...
	I0307 19:19:33.873848    3236 status.go:330] ha-501000 host status = "Running" (err=<nil>)
	I0307 19:19:33.873858    3236 host.go:66] Checking if "ha-501000" exists ...
	I0307 19:19:33.873953    3236 host.go:66] Checking if "ha-501000" exists ...
	I0307 19:19:33.874054    3236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:19:33.874064    3236 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/id_rsa Username:docker}
	W0307 19:19:33.874245    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 19:19:33.874259    3236 retry.go:31] will retry after 150.146734ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 19:19:34.026514    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 19:19:34.026530    3236 retry.go:31] will retry after 498.157553ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 19:19:34.525721    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 19:19:34.525741    3236 retry.go:31] will retry after 313.452283ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 19:19:34.841113    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 19:19:34.841139    3236 retry.go:31] will retry after 993.241337ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 19:20:01.760529    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 19:20:01.760600    3236 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 19:20:01.760610    3236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 19:20:01.760615    3236 status.go:257] ha-501000 status: &{Name:ha-501000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:20:01.760629    3236 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 19:20:01.760635    3236 status.go:255] checking status of ha-501000-m02 ...
	I0307 19:20:01.760851    3236 status.go:330] ha-501000-m02 host status = "Stopped" (err=<nil>)
	I0307 19:20:01.760856    3236 status.go:343] host is not running, skipping remaining checks
	I0307 19:20:01.760858    3236 status.go:257] ha-501000-m02 status: &{Name:ha-501000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:20:01.760863    3236 status.go:255] checking status of ha-501000-m03 ...
	I0307 19:20:01.761556    3236 status.go:330] ha-501000-m03 host status = "Running" (err=<nil>)
	I0307 19:20:01.761563    3236 host.go:66] Checking if "ha-501000-m03" exists ...
	I0307 19:20:01.761675    3236 host.go:66] Checking if "ha-501000-m03" exists ...
	I0307 19:20:01.761813    3236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:20:01.761819    3236 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m03/id_rsa Username:docker}
	W0307 19:21:16.760987    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 19:21:16.761029    3236 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 19:21:16.761039    3236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 19:21:16.761045    3236 status.go:257] ha-501000-m03 status: &{Name:ha-501000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:21:16.761052    3236 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 19:21:16.761056    3236 status.go:255] checking status of ha-501000-m04 ...
	I0307 19:21:16.761843    3236 status.go:330] ha-501000-m04 host status = "Running" (err=<nil>)
	I0307 19:21:16.761851    3236 host.go:66] Checking if "ha-501000-m04" exists ...
	I0307 19:21:16.761958    3236 host.go:66] Checking if "ha-501000-m04" exists ...
	I0307 19:21:16.762076    3236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 19:21:16.762083    3236 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000-m04/id_rsa Username:docker}
	W0307 19:22:31.761257    3236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 19:22:31.761445    3236 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 19:22:31.761490    3236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 19:22:31.761511    3236 status.go:257] ha-501000-m04 status: &{Name:ha-501000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 19:22:31.761557    3236 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
E0307 19:22:37.757951    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 3 (26.000257s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 19:22:57.766760    3302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 19:22:57.766779    3302 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (209.02s)
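
This failure never reaches SSH at all: both qemu2 restart attempts abort with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", which implicates the socket_vmnet daemon on the CI host rather than the VM image. A minimal Go sketch of the corresponding health check, assuming only the socket path that appears in the log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing:", err)
		return
	}
	// "Connection refused" on an existing socket file means the daemon
	// behind it is not accepting connections.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("daemon not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the socket file exists but the dial is refused, restarting the socket_vmnet service on the host (it normally runs as a root daemon) is the usual remedy.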

TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-501000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-501000 -v=7 --alsologtostderr
E0307 19:25:44.706128    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:27:37.728428    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-501000 -v=7 --alsologtostderr: (3m49.015374625s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-501000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-501000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226135375s)

-- stdout --
	* [ha-501000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-501000" primary control-plane node in "ha-501000" cluster
	* Restarting existing qemu2 VM for "ha-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:28:07.267174    3485 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:28:07.267343    3485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:07.267347    3485 out.go:304] Setting ErrFile to fd 2...
	I0307 19:28:07.267350    3485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:07.267518    3485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:28:07.268733    3485 out.go:298] Setting JSON to false
	I0307 19:28:07.288611    3485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3459,"bootTime":1709865028,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:28:07.288665    3485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:28:07.294228    3485 out.go:177] * [ha-501000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:28:07.301123    3485 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:28:07.301226    3485 notify.go:220] Checking for updates...
	I0307 19:28:07.305284    3485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:28:07.308245    3485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:28:07.311257    3485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:28:07.314258    3485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:28:07.317285    3485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:28:07.320606    3485 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:28:07.320664    3485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:28:07.325232    3485 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:28:07.332244    3485 start.go:297] selected driver: qemu2
	I0307 19:28:07.332253    3485 start.go:901] validating driver "qemu2" against &{Name:ha-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-501000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:28:07.332348    3485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:28:07.335343    3485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:28:07.335387    3485 cni.go:84] Creating CNI manager for ""
	I0307 19:28:07.335393    3485 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0307 19:28:07.335434    3485 start.go:340] cluster config:
	{Name:ha-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-501000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:28:07.340916    3485 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:28:07.349114    3485 out.go:177] * Starting "ha-501000" primary control-plane node in "ha-501000" cluster
	I0307 19:28:07.353246    3485 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:28:07.353261    3485 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:28:07.353275    3485 cache.go:56] Caching tarball of preloaded images
	I0307 19:28:07.353346    3485 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:28:07.353352    3485 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:28:07.353446    3485 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/ha-501000/config.json ...
	I0307 19:28:07.353913    3485 start.go:360] acquireMachinesLock for ha-501000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:28:07.353948    3485 start.go:364] duration metric: took 28.459µs to acquireMachinesLock for "ha-501000"
	I0307 19:28:07.353957    3485 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:28:07.353963    3485 fix.go:54] fixHost starting: 
	I0307 19:28:07.354087    3485 fix.go:112] recreateIfNeeded on ha-501000: state=Stopped err=<nil>
	W0307 19:28:07.354097    3485 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:28:07.356011    3485 out.go:177] * Restarting existing qemu2 VM for "ha-501000" ...
	I0307 19:28:07.364325    3485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:11:d1:33:a0:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/disk.qcow2
	I0307 19:28:07.366358    3485 main.go:141] libmachine: STDOUT: 
	I0307 19:28:07.366380    3485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:28:07.366409    3485 fix.go:56] duration metric: took 12.444583ms for fixHost
	I0307 19:28:07.366414    3485 start.go:83] releasing machines lock for "ha-501000", held for 12.46175ms
	W0307 19:28:07.366420    3485 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:28:07.366453    3485 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:28:07.366458    3485 start.go:728] Will try again in 5 seconds ...
	I0307 19:28:12.368550    3485 start.go:360] acquireMachinesLock for ha-501000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:28:12.369051    3485 start.go:364] duration metric: took 381.75µs to acquireMachinesLock for "ha-501000"
	I0307 19:28:12.369190    3485 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:28:12.369216    3485 fix.go:54] fixHost starting: 
	I0307 19:28:12.369959    3485 fix.go:112] recreateIfNeeded on ha-501000: state=Stopped err=<nil>
	W0307 19:28:12.369989    3485 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:28:12.375464    3485 out.go:177] * Restarting existing qemu2 VM for "ha-501000" ...
	I0307 19:28:12.380670    3485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:11:d1:33:a0:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/disk.qcow2
	I0307 19:28:12.390389    3485 main.go:141] libmachine: STDOUT: 
	I0307 19:28:12.390448    3485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:28:12.390536    3485 fix.go:56] duration metric: took 21.3265ms for fixHost
	I0307 19:28:12.390558    3485 start.go:83] releasing machines lock for "ha-501000", held for 21.483542ms
	W0307 19:28:12.390719    3485 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:28:12.398407    3485 out.go:177] 
	W0307 19:28:12.402461    3485 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:28:12.402488    3485 out.go:239] * 
	* 
	W0307 19:28:12.404915    3485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:28:12.415380    3485 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-501000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-501000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (34.153709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.38s)
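
Note the control flow visible in the start.go lines above: the first StartHost error is tolerated ("Will try again in 5 seconds"), and only the second is fatal, producing the GUEST_PROVISION exit. As a rough sketch of that two-attempt pattern (illustrative only, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithOneRetry tries start, and on failure waits delay and tries
// exactly once more, mirroring the two attempts seen in the log.
func startWithOneRetry(start func() error, delay time.Duration) error {
	if err := start(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(delay)
		return start()
	}
	return nil
}

func main() {
	err := startWithOneRetry(func() error {
		// Stand-in for the driver start that fails in this report.
		return errors.New("Failed to connect to \"/var/run/socket_vmnet\": Connection refused")
	}, 5*time.Second)
	fmt.Println("final:", err)
}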

TestMutliControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.682125ms)

-- stdout --
	* The control-plane node ha-501000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-501000"

-- /stdout --
** stderr ** 
	I0307 19:28:12.560695    3500 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:28:12.560946    3500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:12.560949    3500 out.go:304] Setting ErrFile to fd 2...
	I0307 19:28:12.560952    3500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:12.561079    3500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:28:12.561305    3500 mustload.go:65] Loading cluster: ha-501000
	I0307 19:28:12.561511    3500 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 19:28:12.561796    3500 out.go:239] ! The control-plane node ha-501000 host is not running (will try others): state=Stopped
	! The control-plane node ha-501000 host is not running (will try others): state=Stopped
	W0307 19:28:12.561900    3500 out.go:239] ! The control-plane node ha-501000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-501000-m02 host is not running (will try others): state=Stopped
	I0307 19:28:12.566052    3500 out.go:177] * The control-plane node ha-501000-m03 host is not running: state=Stopped
	I0307 19:28:12.569019    3500 out.go:177]   To start a cluster, run: "minikube start -p ha-501000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-501000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr: exit status 7 (31.682625ms)

-- stdout --
	ha-501000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 19:28:12.601077    3502 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:28:12.601241    3502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:12.601244    3502 out.go:304] Setting ErrFile to fd 2...
	I0307 19:28:12.601246    3502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:28:12.601368    3502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:28:12.601497    3502 out.go:298] Setting JSON to false
	I0307 19:28:12.601509    3502 mustload.go:65] Loading cluster: ha-501000
	I0307 19:28:12.601569    3502 notify.go:220] Checking for updates...
	I0307 19:28:12.601728    3502 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:28:12.601735    3502 status.go:255] checking status of ha-501000 ...
	I0307 19:28:12.601932    3502 status.go:330] ha-501000 host status = "Stopped" (err=<nil>)
	I0307 19:28:12.601935    3502 status.go:343] host is not running, skipping remaining checks
	I0307 19:28:12.601937    3502 status.go:257] ha-501000 status: &{Name:ha-501000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:28:12.601951    3502 status.go:255] checking status of ha-501000-m02 ...
	I0307 19:28:12.602040    3502 status.go:330] ha-501000-m02 host status = "Stopped" (err=<nil>)
	I0307 19:28:12.602043    3502 status.go:343] host is not running, skipping remaining checks
	I0307 19:28:12.602045    3502 status.go:257] ha-501000-m02 status: &{Name:ha-501000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:28:12.602050    3502 status.go:255] checking status of ha-501000-m03 ...
	I0307 19:28:12.602134    3502 status.go:330] ha-501000-m03 host status = "Stopped" (err=<nil>)
	I0307 19:28:12.602137    3502 status.go:343] host is not running, skipping remaining checks
	I0307 19:28:12.602139    3502 status.go:257] ha-501000-m03 status: &{Name:ha-501000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:28:12.602143    3502 status.go:255] checking status of ha-501000-m04 ...
	I0307 19:28:12.602234    3502 status.go:330] ha-501000-m04 host status = "Stopped" (err=<nil>)
	I0307 19:28:12.602237    3502 status.go:343] host is not running, skipping remaining checks
	I0307 19:28:12.602239    3502 status.go:257] ha-501000-m04 status: &{Name:ha-501000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (31.174916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.08s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.027920792s)
ha_test.go:413: expected profile "ha-501000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-501000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-501000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-501000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (50.028333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.08s)
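Note: the assertion at ha_test.go:413 above reads the per-profile Status field out of the "profile list --output json" payload quoted in the failure message; with every host stopped, the profile reports "Stopped" rather than the expected "Degraded". A minimal Go sketch of that style of check follows; the profileList struct and the trimmed inline sample are illustrative assumptions, not the suite's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields this check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed, hypothetical subset of the payload captured above.
		raw := `{"invalid":[],"valid":[{"Name":"ha-501000","Status":"Stopped"}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: got status %q, want %q\n", p.Name, p.Status, "Degraded")
		}
	}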

TestMutliControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 stop -v=7 --alsologtostderr
E0307 19:29:00.790906    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:30:44.683097    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-501000 stop -v=7 --alsologtostderr: (3m21.986682125s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr: exit status 7 (69.687166ms)

-- stdout --
	ha-501000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 19:31:36.758587    3617 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:31:36.758784    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:36.758788    3617 out.go:304] Setting ErrFile to fd 2...
	I0307 19:31:36.758791    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:36.758962    3617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:31:36.759138    3617 out.go:298] Setting JSON to false
	I0307 19:31:36.759155    3617 mustload.go:65] Loading cluster: ha-501000
	I0307 19:31:36.759192    3617 notify.go:220] Checking for updates...
	I0307 19:31:36.759439    3617 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:31:36.759445    3617 status.go:255] checking status of ha-501000 ...
	I0307 19:31:36.759715    3617 status.go:330] ha-501000 host status = "Stopped" (err=<nil>)
	I0307 19:31:36.759720    3617 status.go:343] host is not running, skipping remaining checks
	I0307 19:31:36.759724    3617 status.go:257] ha-501000 status: &{Name:ha-501000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:31:36.759738    3617 status.go:255] checking status of ha-501000-m02 ...
	I0307 19:31:36.759872    3617 status.go:330] ha-501000-m02 host status = "Stopped" (err=<nil>)
	I0307 19:31:36.759876    3617 status.go:343] host is not running, skipping remaining checks
	I0307 19:31:36.759881    3617 status.go:257] ha-501000-m02 status: &{Name:ha-501000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:31:36.759887    3617 status.go:255] checking status of ha-501000-m03 ...
	I0307 19:31:36.760020    3617 status.go:330] ha-501000-m03 host status = "Stopped" (err=<nil>)
	I0307 19:31:36.760024    3617 status.go:343] host is not running, skipping remaining checks
	I0307 19:31:36.760027    3617 status.go:257] ha-501000-m03 status: &{Name:ha-501000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 19:31:36.760031    3617 status.go:255] checking status of ha-501000-m04 ...
	I0307 19:31:36.760155    3617 status.go:330] ha-501000-m04 host status = "Stopped" (err=<nil>)
	I0307 19:31:36.760159    3617 status.go:343] host is not running, skipping remaining checks
	I0307 19:31:36.760162    3617 status.go:257] ha-501000-m04 status: &{Name:ha-501000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr": ha-501000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-501000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (34.182917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopCluster (202.09s)

TestMutliControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-501000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-501000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183114875s)

-- stdout --
	* [ha-501000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-501000" primary control-plane node in "ha-501000" cluster
	* Restarting existing qemu2 VM for "ha-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:31:36.825368    3621 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:31:36.825519    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:36.825523    3621 out.go:304] Setting ErrFile to fd 2...
	I0307 19:31:36.825525    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:36.825672    3621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:31:36.826817    3621 out.go:298] Setting JSON to false
	I0307 19:31:36.842972    3621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3668,"bootTime":1709865028,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:31:36.843037    3621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:31:36.848402    3621 out.go:177] * [ha-501000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:31:36.855317    3621 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:31:36.859325    3621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:31:36.855409    3621 notify.go:220] Checking for updates...
	I0307 19:31:36.863289    3621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:31:36.866370    3621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:31:36.869394    3621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:31:36.872370    3621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:31:36.875696    3621 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:31:36.875961    3621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:31:36.880344    3621 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:31:36.887297    3621 start.go:297] selected driver: qemu2
	I0307 19:31:36.887302    3621 start.go:901] validating driver "qemu2" against &{Name:ha-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-501000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:31:36.887372    3621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:31:36.889584    3621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:31:36.889635    3621 cni.go:84] Creating CNI manager for ""
	I0307 19:31:36.889641    3621 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0307 19:31:36.889691    3621 start.go:340] cluster config:
	{Name:ha-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-501000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:31:36.894108    3621 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:31:36.902299    3621 out.go:177] * Starting "ha-501000" primary control-plane node in "ha-501000" cluster
	I0307 19:31:36.906346    3621 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:31:36.906360    3621 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:31:36.906372    3621 cache.go:56] Caching tarball of preloaded images
	I0307 19:31:36.906435    3621 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:31:36.906441    3621 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:31:36.906542    3621 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/ha-501000/config.json ...
	I0307 19:31:36.907025    3621 start.go:360] acquireMachinesLock for ha-501000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:31:36.907061    3621 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-501000"
	I0307 19:31:36.907070    3621 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:31:36.907076    3621 fix.go:54] fixHost starting: 
	I0307 19:31:36.907204    3621 fix.go:112] recreateIfNeeded on ha-501000: state=Stopped err=<nil>
	W0307 19:31:36.907214    3621 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:31:36.911328    3621 out.go:177] * Restarting existing qemu2 VM for "ha-501000" ...
	I0307 19:31:36.919312    3621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:11:d1:33:a0:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/disk.qcow2
	I0307 19:31:36.921528    3621 main.go:141] libmachine: STDOUT: 
	I0307 19:31:36.921552    3621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:31:36.921582    3621 fix.go:56] duration metric: took 14.504333ms for fixHost
	I0307 19:31:36.921587    3621 start.go:83] releasing machines lock for "ha-501000", held for 14.521833ms
	W0307 19:31:36.921593    3621 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:31:36.921636    3621 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:31:36.921641    3621 start.go:728] Will try again in 5 seconds ...
	I0307 19:31:41.923677    3621 start.go:360] acquireMachinesLock for ha-501000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:31:41.924144    3621 start.go:364] duration metric: took 347µs to acquireMachinesLock for "ha-501000"
	I0307 19:31:41.924291    3621 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:31:41.924312    3621 fix.go:54] fixHost starting: 
	I0307 19:31:41.925031    3621 fix.go:112] recreateIfNeeded on ha-501000: state=Stopped err=<nil>
	W0307 19:31:41.925059    3621 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:31:41.933592    3621 out.go:177] * Restarting existing qemu2 VM for "ha-501000" ...
	I0307 19:31:41.936729    3621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:11:d1:33:a0:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/ha-501000/disk.qcow2
	I0307 19:31:41.947108    3621 main.go:141] libmachine: STDOUT: 
	I0307 19:31:41.947176    3621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:31:41.947264    3621 fix.go:56] duration metric: took 22.954209ms for fixHost
	I0307 19:31:41.947284    3621 start.go:83] releasing machines lock for "ha-501000", held for 23.118209ms
	W0307 19:31:41.947423    3621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:31:41.954540    3621 out.go:177] 
	W0307 19:31:41.958539    3621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:31:41.958565    3621 out.go:239] * 
	* 
	W0307 19:31:41.961372    3621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:31:41.966533    3621 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-501000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (70.447625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartCluster (5.26s)
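Note: both restart attempts above fail at the same step: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet (the SocketVMnetPath from the profile config), which suggests no socket_vmnet daemon was listening on the build host. A standalone Go probe for that precondition could look like the sketch below; this is a diagnostic assumption, not part of the suite:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to.
		// A "connection refused" here would reproduce the driver failure above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}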

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-501000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-501000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-501000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-501000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (31.871292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-501000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-501000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.689666ms)

-- stdout --
	* The control-plane node ha-501000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-501000"

-- /stdout --
** stderr ** 
	I0307 19:31:42.188671    3637 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:31:42.189047    3637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:42.189051    3637 out.go:304] Setting ErrFile to fd 2...
	I0307 19:31:42.189058    3637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:31:42.189228    3637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:31:42.189464    3637 mustload.go:65] Loading cluster: ha-501000
	I0307 19:31:42.189675    3637 config.go:182] Loaded profile config "ha-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 19:31:42.190038    3637 out.go:239] ! The control-plane node ha-501000 host is not running (will try others): state=Stopped
	! The control-plane node ha-501000 host is not running (will try others): state=Stopped
	W0307 19:31:42.190139    3637 out.go:239] ! The control-plane node ha-501000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-501000-m02 host is not running (will try others): state=Stopped
	I0307 19:31:42.194327    3637 out.go:177] * The control-plane node ha-501000-m03 host is not running: state=Stopped
	I0307 19:31:42.198364    3637 out.go:177]   To start a cluster, run: "minikube start -p ha-501000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-501000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-501000 -n ha-501000: exit status 7 (31.92625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

TestImageBuild/serial/Setup (10.09s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-446000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-446000 --driver=qemu2 : exit status 80 (10.024615041s)

-- stdout --
	* [image-446000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-446000" primary control-plane node in "image-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-446000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-446000 -n image-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-446000 -n image-446000: exit status 7 (69.233542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.09s)

TestJSONOutput/start/Command (9.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-582000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-582000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.749189625s)

-- stdout --
	{"specversion":"1.0","id":"0146ce58-1a12-433d-b657-cf903d68e50e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-582000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"75abea1f-c6f1-44ac-b73d-52162c538009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"f4a5e964-65c1-436d-8d3a-bd9cd77d8f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig"}}
	{"specversion":"1.0","id":"479969ca-b5a7-4862-9c90-92e1585f2744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"715f27d6-6280-49b5-a0ca-2190baed10bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5da93553-d577-44bf-b9a3-63d8b9d2e39f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube"}}
	{"specversion":"1.0","id":"dce99ccb-1be3-42d8-9d62-1218e9774bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82ddb034-ea0c-4550-a70d-071e7e774e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a823c72-d2c5-4b64-92b8-5ca97027277b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d26a41b8-93ba-4df7-b2ed-1082deabbca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-582000\" primary control-plane node in \"json-output-582000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf2aa052-635d-4a3d-b103-4b4708a73fa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"dd1529ca-fc90-4a2c-be5f-4318ea84ac49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-582000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2e36d5b-5a78-4343-bd9b-1b00197707c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"02ffe388-967b-4c45-99ca-a67022b4ca7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c9df6329-ccce-446e-8505-995544adb03a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-582000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"77033ba9-c850-4116-b974-81707c3ac48d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"006dc7a5-85f6-4009-8e4d-488c9df2cb79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-582000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.75s)
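The two marshalling errors above (json_output_test.go:213 and :70) come from feeding a non-JSON line ("OUTPUT: ") into a JSON decoder; the same failure mode recurs in the unpause test below, where a "*"-prefixed plain-text line produces "invalid character '*'". A minimal sketch of that decode path, with a hypothetical parseEvents helper standing in for the test's real code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseEvents is a hypothetical stand-in for the test's cloud-events decoding:
// each stdout line is expected to be a single JSON CloudEvent object.
func parseEvents(lines []string) error {
	for _, line := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// A plain-text line such as "OUTPUT: " fails with:
			// invalid character 'O' looking for beginning of value
			return fmt.Errorf("converting to cloud events: %w", err)
		}
	}
	return nil
}

func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `, // interleaved driver output, not JSON
	}
	fmt.Println(parseEvents(lines))
}
```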

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-582000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-582000 --output=json --user=testUser: exit status 83 (79.49025ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"668cf800-0805-41a0-86fa-61f1129ebf2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-582000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"b6ae699a-52f6-447e-b675-63c1d64f1f5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-582000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-582000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-582000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-582000 --output=json --user=testUser: exit status 83 (49.237875ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-582000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-582000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-582000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-582000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.31s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-954000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-954000 --driver=qemu2 : exit status 80 (9.869459917s)

                                                
                                                
-- stdout --
	* [first-954000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-954000" primary control-plane node in "first-954000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-954000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-954000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 19:32:16.441625 -0800 PST m=+2197.756083417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-956000 -n second-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-956000 -n second-956000: exit status 85 (81.462209ms)

                                                
                                                
-- stdout --
	* Profile "second-956000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-956000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-956000" host is not running, skipping log retrieval (state="* Profile \"second-956000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-956000\"")
helpers_test.go:175: Cleaning up "second-956000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-956000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 19:32:16.751395 -0800 PST m=+2198.065866042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-954000 -n first-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-954000 -n first-954000: exit status 7 (31.305958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-954000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-954000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-954000
--- FAIL: TestMinikubeProfile (10.31s)
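Every start failure in this report traces back to the same root cause: nothing is listening on /var/run/socket_vmnet when the qemu2 driver launches socket_vmnet_client. A small diagnostic sketch (not part of minikube) that reproduces the "Connection refused" seen above:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the socket_vmnet daemon down this prints, e.g.:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```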

                                                
                                    
TestMountStart/serial/StartWithMountFirst (11.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-321000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-321000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.981498375s)

                                                
                                                
-- stdout --
	* [mount-start-1-321000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-321000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-321000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-321000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-321000 -n mount-start-1-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-321000 -n mount-start-1-321000: exit status 7 (68.795167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (11.05s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-407000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0307 19:32:37.716486    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-407000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.7740595s)

                                                
                                                
-- stdout --
	* [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:32:28.309736    3819 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:32:28.309851    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:32:28.309854    3819 out.go:304] Setting ErrFile to fd 2...
	I0307 19:32:28.309856    3819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:32:28.309989    3819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:32:28.311096    3819 out.go:298] Setting JSON to false
	I0307 19:32:28.327140    3819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3720,"bootTime":1709865028,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:32:28.327201    3819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:32:28.333359    3819 out.go:177] * [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:32:28.340271    3819 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:32:28.340334    3819 notify.go:220] Checking for updates...
	I0307 19:32:28.347323    3819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:32:28.350264    3819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:32:28.353299    3819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:32:28.356305    3819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:32:28.359237    3819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:32:28.362501    3819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:32:28.366320    3819 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:32:28.373285    3819 start.go:297] selected driver: qemu2
	I0307 19:32:28.373292    3819 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:32:28.373299    3819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:32:28.375556    3819 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:32:28.378267    3819 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:32:28.381363    3819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:32:28.381396    3819 cni.go:84] Creating CNI manager for ""
	I0307 19:32:28.381401    3819 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 19:32:28.381410    3819 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 19:32:28.381435    3819 start.go:340] cluster config:
	{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:32:28.386205    3819 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:32:28.393319    3819 out.go:177] * Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	I0307 19:32:28.397289    3819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:32:28.397306    3819 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:32:28.397319    3819 cache.go:56] Caching tarball of preloaded images
	I0307 19:32:28.397384    3819 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:32:28.397391    3819 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:32:28.397627    3819 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/multinode-407000/config.json ...
	I0307 19:32:28.397641    3819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/multinode-407000/config.json: {Name:mk87bb654906dedabab760135b02f983d0e963bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:32:28.397871    3819 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:32:28.397905    3819 start.go:364] duration metric: took 28µs to acquireMachinesLock for "multinode-407000"
	I0307 19:32:28.397916    3819 start.go:93] Provisioning new machine with config: &{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:32:28.397951    3819 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:32:28.402159    3819 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:32:28.420318    3819 start.go:159] libmachine.API.Create for "multinode-407000" (driver="qemu2")
	I0307 19:32:28.420347    3819 client.go:168] LocalClient.Create starting
	I0307 19:32:28.420411    3819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:32:28.420441    3819 main.go:141] libmachine: Decoding PEM data...
	I0307 19:32:28.420449    3819 main.go:141] libmachine: Parsing certificate...
	I0307 19:32:28.420499    3819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:32:28.420522    3819 main.go:141] libmachine: Decoding PEM data...
	I0307 19:32:28.420530    3819 main.go:141] libmachine: Parsing certificate...
	I0307 19:32:28.420878    3819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:32:28.556953    3819 main.go:141] libmachine: Creating SSH key...
	I0307 19:32:28.652038    3819 main.go:141] libmachine: Creating Disk image...
	I0307 19:32:28.652043    3819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:32:28.652201    3819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:28.665087    3819 main.go:141] libmachine: STDOUT: 
	I0307 19:32:28.665102    3819 main.go:141] libmachine: STDERR: 
	I0307 19:32:28.665147    3819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2 +20000M
	I0307 19:32:28.676436    3819 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:32:28.676462    3819 main.go:141] libmachine: STDERR: 
	I0307 19:32:28.676482    3819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:28.676487    3819 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:32:28.676516    3819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:26:d2:4f:cb:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:28.678220    3819 main.go:141] libmachine: STDOUT: 
	I0307 19:32:28.678234    3819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:32:28.678257    3819 client.go:171] duration metric: took 257.91425ms to LocalClient.Create
	I0307 19:32:30.680493    3819 start.go:128] duration metric: took 2.2826075s to createHost
	I0307 19:32:30.680550    3819 start.go:83] releasing machines lock for "multinode-407000", held for 2.282729583s
	W0307 19:32:30.680598    3819 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:32:30.691446    3819 out.go:177] * Deleting "multinode-407000" in qemu2 ...
	W0307 19:32:30.720069    3819 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:32:30.720096    3819 start.go:728] Will try again in 5 seconds ...
	I0307 19:32:35.722060    3819 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:32:35.722481    3819 start.go:364] duration metric: took 336.417µs to acquireMachinesLock for "multinode-407000"
	I0307 19:32:35.722614    3819 start.go:93] Provisioning new machine with config: &{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:32:35.722817    3819 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:32:35.728026    3819 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:32:35.777528    3819 start.go:159] libmachine.API.Create for "multinode-407000" (driver="qemu2")
	I0307 19:32:35.777576    3819 client.go:168] LocalClient.Create starting
	I0307 19:32:35.777698    3819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:32:35.777782    3819 main.go:141] libmachine: Decoding PEM data...
	I0307 19:32:35.777801    3819 main.go:141] libmachine: Parsing certificate...
	I0307 19:32:35.777869    3819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:32:35.777917    3819 main.go:141] libmachine: Decoding PEM data...
	I0307 19:32:35.777934    3819 main.go:141] libmachine: Parsing certificate...
	I0307 19:32:35.778557    3819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:32:35.944201    3819 main.go:141] libmachine: Creating SSH key...
	I0307 19:32:35.977438    3819 main.go:141] libmachine: Creating Disk image...
	I0307 19:32:35.977442    3819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:32:35.977610    3819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:35.990100    3819 main.go:141] libmachine: STDOUT: 
	I0307 19:32:35.990119    3819 main.go:141] libmachine: STDERR: 
	I0307 19:32:35.990174    3819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2 +20000M
	I0307 19:32:36.001038    3819 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:32:36.001054    3819 main.go:141] libmachine: STDERR: 
	I0307 19:32:36.001066    3819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:36.001072    3819 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:32:36.001116    3819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c4:3f:31:55:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:32:36.002894    3819 main.go:141] libmachine: STDOUT: 
	I0307 19:32:36.002908    3819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:32:36.002921    3819 client.go:171] duration metric: took 225.347125ms to LocalClient.Create
	I0307 19:32:38.005013    3819 start.go:128] duration metric: took 2.282259917s to createHost
	I0307 19:32:38.005163    3819 start.go:83] releasing machines lock for "multinode-407000", held for 2.282671709s
	W0307 19:32:38.005552    3819 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:32:38.020199    3819 out.go:177] 
	W0307 19:32:38.025341    3819 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:32:38.025383    3819 out.go:239] * 
	* 
	W0307 19:32:38.028112    3819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:32:38.038134    3819 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-407000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (71.631542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
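The --alsologtostderr trace above shows the driver's recovery path: the first create fails, the half-created profile is deleted, the code waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. A sketch of that observed retry shape only, with a hypothetical createHost standing in for the real libmachine create call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a hypothetical stand-in for the libmachine create call;
// while socket_vmnet is down it always fails the same way.
func createHost() error {
	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}
```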

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (90.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.779375ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-407000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- rollout status deployment/busybox: exit status 1 (58.09525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.25875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.272584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.955875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.685375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.460584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.593333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.960084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.773917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.911041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.362416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.145333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.943ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.078542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.0795ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.684375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (90.42s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-407000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.669125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.74775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-407000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-407000 -v 3 --alsologtostderr: exit status 83 (45.336959ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-407000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-407000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:08.667059    3928 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:08.667211    3928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:08.667214    3928 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:08.667216    3928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:08.667351    3928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:08.667593    3928 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:08.667775    3928 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:08.673278    3928 out.go:177] * The control-plane node multinode-407000 host is not running: state=Stopped
	I0307 19:34:08.678202    3928 out.go:177]   To start a cluster, run: "minikube start -p multinode-407000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-407000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.433292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-407000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-407000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (94.423167ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-407000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-407000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-407000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)
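Two errors compound here: kubectl fails because the kubeconfig context was never created (the cluster never started), and the test then feeds kubectl's empty stdout to a JSON decoder, which reports "unexpected end of JSON input". A standalone sketch of that second error:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// kubectl wrote nothing to stdout (only a stderr error), so the
    	// decoder sees an empty document; this is exactly the
    	// "unexpected end of JSON input" in the log above.
    	var labels []map[string]string
    	err := json.Unmarshal([]byte(""), &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }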

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-407000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-407000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-407000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-407000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.681625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
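The assertion compares node counts, not status: the saved profile's "Nodes" array holds only the single control-plane entry, while a healthy three-node run would have saved three. A trimmed-down sketch of that count (field names match the profile JSON above; everything else is omitted, and the payload is an abbreviated stand-in):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Only the fields the check needs, named as in the JSON above.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Config struct {
    			Nodes []json.RawMessage `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	data := []byte(`{"valid":[{"Name":"multinode-407000","Config":{"Nodes":[{"ControlPlane":true}]}}]}`)
    	var pl profileList
    	if err := json.Unmarshal(data, &pl); err != nil {
    		panic(err)
    	}
    	fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, where the test expects 3
    }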

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status --output json --alsologtostderr: exit status 7 (31.580875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-407000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:08.974437    3941 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:08.974575    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:08.974578    3941 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:08.974580    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:08.974703    3941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:08.974824    3941 out.go:298] Setting JSON to true
	I0307 19:34:08.974836    3941 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:08.974892    3941 notify.go:220] Checking for updates...
	I0307 19:34:08.975044    3941 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:08.975051    3941 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:08.975248    3941 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:08.975252    3941 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:08.975254    3941 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-407000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.647625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
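The decode error here is a shape mismatch rather than bad output: with a single node, "status --output json" emits one JSON object, while the test unmarshals into a slice of statuses, hence "cannot unmarshal object into Go value of type []cmd.Status". A self-contained sketch, with a local status type standing in for cmd.Status:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type status struct {
    	Name, Host string
    }

    func main() {
    	out := []byte(`{"Name":"multinode-407000","Host":"Stopped"}`)
    	var many []status
    	fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.status
    	var one status
    	fmt.Println(json.Unmarshal(out, &one)) // <nil>: the same bytes decode fine as a single object
    }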

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 node stop m03: exit status 85 (48.0165ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-407000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status: exit status 7 (32.890333ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr: exit status 7 (31.458542ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:09.119166    3949 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:09.119301    3949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.119304    3949 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:09.119306    3949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.119421    3949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:09.119543    3949 out.go:298] Setting JSON to false
	I0307 19:34:09.119555    3949 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:09.119619    3949 notify.go:220] Checking for updates...
	I0307 19:34:09.119789    3949 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:09.119794    3949 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:09.120005    3949 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:09.120008    3949 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:09.120010    3949 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr": multinode-407000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (32.134875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
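Exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the earlier failures: FreshStart2Nodes never created the worker machines, so the profile's node list holds only the primary and a lookup for "m03" finds nothing. A minimal sketch of that lookup (node-name scheme assumed; minikube derives it from the profile config):

    package main

    import "fmt"

    func main() {
    	// Only the primary was ever saved to the profile (see the
    	// ProfileList failure above); m02/m03 were never created.
    	nodes := []string{"multinode-407000"}
    	target := "multinode-407000-m03"
    	for _, n := range nodes {
    		if n == target {
    			fmt.Println("found", n)
    			return
    		}
    	}
    	fmt.Println("Could not find node m03") // the GUEST_NODE_RETRIEVE path
    }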

                                                
                                    
TestMultiNode/serial/StartAfterStop (47.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.10475ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:09.183437    3953 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:09.183651    3953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.183654    3953 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:09.183656    3953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.183793    3953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:09.184036    3953 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:09.184216    3953 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:09.188548    3953 out.go:177] 
	W0307 19:34:09.191474    3953 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0307 19:34:09.191479    3953 out.go:239] * 
	* 
	W0307 19:34:09.193086    3953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:34:09.196418    3953 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0307 19:34:09.183437    3953 out.go:291] Setting OutFile to fd 1 ...
I0307 19:34:09.183651    3953 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:34:09.183654    3953 out.go:304] Setting ErrFile to fd 2...
I0307 19:34:09.183656    3953 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:34:09.183793    3953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:34:09.184036    3953 mustload.go:65] Loading cluster: multinode-407000
I0307 19:34:09.184216    3953 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:34:09.188548    3953 out.go:177] 
W0307 19:34:09.191474    3953 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0307 19:34:09.191479    3953 out.go:239] * 
* 
W0307 19:34:09.193086    3953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 19:34:09.196418    3953 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-407000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (31.354084ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:09.230205    3955 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:09.230338    3955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.230342    3955 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:09.230344    3955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:09.230466    3955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:09.230580    3955 out.go:298] Setting JSON to false
	I0307 19:34:09.230591    3955 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:09.230653    3955 notify.go:220] Checking for updates...
	I0307 19:34:09.230795    3955 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:09.230800    3955 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:09.231011    3955 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:09.231015    3955 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:09.231017    3955 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (75.408125ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:10.697851    3957 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:10.698026    3957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:10.698030    3957 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:10.698034    3957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:10.698223    3957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:10.698380    3957 out.go:298] Setting JSON to false
	I0307 19:34:10.698395    3957 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:10.698435    3957 notify.go:220] Checking for updates...
	I0307 19:34:10.698638    3957 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:10.698649    3957 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:10.698924    3957 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:10.698929    3957 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:10.698932    3957 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (73.723709ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:12.530016    3959 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:12.530185    3959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:12.530189    3959 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:12.530192    3959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:12.530361    3959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:12.530517    3959 out.go:298] Setting JSON to false
	I0307 19:34:12.530533    3959 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:12.530563    3959 notify.go:220] Checking for updates...
	I0307 19:34:12.530799    3959 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:12.530806    3959 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:12.531074    3959 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:12.531079    3959 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:12.531082    3959 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (76.419416ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:13.919155    3961 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:13.919378    3961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:13.919383    3961 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:13.919386    3961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:13.919556    3961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:13.919720    3961 out.go:298] Setting JSON to false
	I0307 19:34:13.919738    3961 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:13.919800    3961 notify.go:220] Checking for updates...
	I0307 19:34:13.920010    3961 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:13.920022    3961 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:13.920304    3961 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:13.920309    3961 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:13.920312    3961 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (72.702167ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:17.185447    3963 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:17.185648    3963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:17.185653    3963 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:17.185656    3963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:17.185809    3963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:17.185962    3963 out.go:298] Setting JSON to false
	I0307 19:34:17.185978    3963 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:17.186012    3963 notify.go:220] Checking for updates...
	I0307 19:34:17.186235    3963 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:17.186242    3963 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:17.186493    3963 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:17.186498    3963 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:17.186500    3963 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (72.934208ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:19.895231    3967 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:19.895424    3967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:19.895432    3967 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:19.895435    3967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:19.895598    3967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:19.895753    3967 out.go:298] Setting JSON to false
	I0307 19:34:19.895771    3967 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:19.895796    3967 notify.go:220] Checking for updates...
	I0307 19:34:19.896019    3967 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:19.896026    3967 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:19.896286    3967 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:19.896291    3967 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:19.896294    3967 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (74.309167ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:23.965702    3969 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:23.965882    3969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:23.965886    3969 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:23.965889    3969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:23.966068    3969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:23.966238    3969 out.go:298] Setting JSON to false
	I0307 19:34:23.966252    3969 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:23.966287    3969 notify.go:220] Checking for updates...
	I0307 19:34:23.966499    3969 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:23.966506    3969 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:23.966771    3969 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:23.966776    3969 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:23.966778    3969 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (74.414708ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:37.857682    3979 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:37.857881    3979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:37.857885    3979 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:37.857888    3979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:37.858061    3979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:37.858225    3979 out.go:298] Setting JSON to false
	I0307 19:34:37.858241    3979 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:37.858273    3979 notify.go:220] Checking for updates...
	I0307 19:34:37.858507    3979 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:37.858514    3979 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:37.858794    3979 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:37.858799    3979 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:37.858802    3979 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr: exit status 7 (72.919292ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:56.246389    3993 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:56.246598    3993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:56.246602    3993 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:56.246605    3993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:56.246775    3993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:56.246934    3993 out.go:298] Setting JSON to false
	I0307 19:34:56.246949    3993 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:34:56.246989    3993 notify.go:220] Checking for updates...
	I0307 19:34:56.247204    3993 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:56.247211    3993 status.go:255] checking status of multinode-407000 ...
	I0307 19:34:56.247508    3993 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:34:56.247515    3993 status.go:343] host is not running, skipping remaining checks
	I0307 19:34:56.247518    3993 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-407000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (34.628833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.13s)
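The 47-second duration comes from polling: the test re-runs "status" at widening intervals (the timestamps above step from 19:34:09 to 19:34:56) waiting for the host to come up, and fails only once the retry budget is spent. A sketch of such a backoff loop (binary path and profile from the log; the 45s budget and doubling interval are assumptions, not the suite's exact constants):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	bin, profile := "out/minikube-darwin-arm64", "multinode-407000"
    	deadline := time.Now().Add(45 * time.Second)
    	for wait := time.Second; time.Now().Before(deadline); wait *= 2 {
    		out, _ := exec.Command(bin, "-p", profile, "status", "--format={{.Host}}").Output()
    		if strings.TrimSpace(string(out)) == "Running" {
    			fmt.Println("host is up")
    			return
    		}
    		time.Sleep(wait) // 1s, 2s, 4s, ... mirroring the widening gaps above
    	}
    	fmt.Println("gave up: host never reported Running")
    }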

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-407000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-407000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-407000: (1.954431333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-407000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-407000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231681s)

                                                
                                                
-- stdout --
	* [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	* Restarting existing qemu2 VM for "multinode-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:34:58.336236    4013 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:34:58.336417    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:58.336428    4013 out.go:304] Setting ErrFile to fd 2...
	I0307 19:34:58.336433    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:34:58.336629    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:34:58.337948    4013 out.go:298] Setting JSON to false
	I0307 19:34:58.356903    4013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3870,"bootTime":1709865028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:34:58.356990    4013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:34:58.362005    4013 out.go:177] * [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:34:58.369889    4013 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:34:58.373923    4013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:34:58.369950    4013 notify.go:220] Checking for updates...
	I0307 19:34:58.379878    4013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:34:58.382911    4013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:34:58.384328    4013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:34:58.387878    4013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:34:58.391232    4013 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:34:58.391296    4013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:34:58.395739    4013 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:34:58.402885    4013 start.go:297] selected driver: qemu2
	I0307 19:34:58.402891    4013 start.go:901] validating driver "qemu2" against &{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:34:58.402942    4013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:34:58.405332    4013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:34:58.405385    4013 cni.go:84] Creating CNI manager for ""
	I0307 19:34:58.405392    4013 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 19:34:58.405454    4013 start.go:340] cluster config:
	{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:34:58.410064    4013 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:34:58.416853    4013 out.go:177] * Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	I0307 19:34:58.420895    4013 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:34:58.420910    4013 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:34:58.420919    4013 cache.go:56] Caching tarball of preloaded images
	I0307 19:34:58.420986    4013 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:34:58.420992    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:34:58.421052    4013 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/multinode-407000/config.json ...
	I0307 19:34:58.421529    4013 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:34:58.421564    4013 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "multinode-407000"
	I0307 19:34:58.421573    4013 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:34:58.421579    4013 fix.go:54] fixHost starting: 
	I0307 19:34:58.421713    4013 fix.go:112] recreateIfNeeded on multinode-407000: state=Stopped err=<nil>
	W0307 19:34:58.421722    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:34:58.425841    4013 out.go:177] * Restarting existing qemu2 VM for "multinode-407000" ...
	I0307 19:34:58.433875    4013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c4:3f:31:55:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:34:58.436178    4013 main.go:141] libmachine: STDOUT: 
	I0307 19:34:58.436202    4013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:34:58.436232    4013 fix.go:56] duration metric: took 14.652ms for fixHost
	I0307 19:34:58.436238    4013 start.go:83] releasing machines lock for "multinode-407000", held for 14.67025ms
	W0307 19:34:58.436244    4013 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:34:58.436283    4013 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:34:58.436288    4013 start.go:728] Will try again in 5 seconds ...
	I0307 19:35:03.436856    4013 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:35:03.437231    4013 start.go:364] duration metric: took 289.542µs to acquireMachinesLock for "multinode-407000"
	I0307 19:35:03.437398    4013 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:35:03.437426    4013 fix.go:54] fixHost starting: 
	I0307 19:35:03.438222    4013 fix.go:112] recreateIfNeeded on multinode-407000: state=Stopped err=<nil>
	W0307 19:35:03.438248    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:35:03.448807    4013 out.go:177] * Restarting existing qemu2 VM for "multinode-407000" ...
	I0307 19:35:03.452931    4013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c4:3f:31:55:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:35:03.463470    4013 main.go:141] libmachine: STDOUT: 
	I0307 19:35:03.463562    4013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:35:03.463646    4013 fix.go:56] duration metric: took 26.22425ms for fixHost
	I0307 19:35:03.463666    4013 start.go:83] releasing machines lock for "multinode-407000", held for 26.41575ms
	W0307 19:35:03.463875    4013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:03.471774    4013 out.go:177] 
	W0307 19:35:03.475868    4013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:35:03.475936    4013 out.go:239] * 
	* 
	W0307 19:35:03.478545    4013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:35:03.486770    4013 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-407000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-407000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (34.337166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.32s)
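
Every restart attempt above dies at the same point: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's UNIX socket at /var/run/socket_vmnet before it can hand QEMU the file descriptor named in -netdev socket,id=net0,fd=3. With no daemon listening on this agent, every connect is refused and no VM ever boots. A minimal pre-flight check for the build host, assuming socket_vmnet was installed via its Homebrew formula (the sudo brew services invocation follows minikube's qemu driver docs; adjust if these agents launch the daemon differently):

	# Is the daemon's UNIX socket present, and does a live process hold it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet

	# (Re)start the daemon; it must run as root to use vmnet.framework.
	sudo brew services restart socket_vmnet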

TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 node delete m03: exit status 83 (41.016375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-407000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-407000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-407000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr: exit status 7 (32.02375ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:35:03.677432    4027 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:35:03.677602    4027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:03.677606    4027 out.go:304] Setting ErrFile to fd 2...
	I0307 19:35:03.677608    4027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:03.677722    4027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:35:03.677847    4027 out.go:298] Setting JSON to false
	I0307 19:35:03.677859    4027 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:35:03.677911    4027 notify.go:220] Checking for updates...
	I0307 19:35:03.678057    4027 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:35:03.678063    4027 status.go:255] checking status of multinode-407000 ...
	I0307 19:35:03.678260    4027 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:35:03.678264    4027 status.go:343] host is not running, skipping remaining checks
	I0307 19:35:03.678266    4027 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.298833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
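
The exit codes carry the diagnosis here: node delete returns 83 because minikube refuses to act while the control-plane host is Stopped (printing the "To start a cluster" hint instead), while status returns 7 to signal a stopped host rather than a command error. A quick shell check that keys off exit codes instead of parsing output (the code values are the ones observed in this run):

	out/minikube-darwin-arm64 -p multinode-407000 node delete m03
	echo "node delete exit: $?"   # 83 in this run: control plane not running

	out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000
	echo "status exit: $?"        # 7 in this run: host reported as Stopped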

TestMultiNode/serial/StopMultiNode (3.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-407000 stop: (3.175041875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status: exit status 7 (64.363708ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr: exit status 7 (33.425459ms)

                                                
                                                
-- stdout --
	multinode-407000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:35:06.982117    4051 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:35:06.982244    4051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:06.982247    4051 out.go:304] Setting ErrFile to fd 2...
	I0307 19:35:06.982250    4051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:06.982371    4051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:35:06.982503    4051 out.go:298] Setting JSON to false
	I0307 19:35:06.982516    4051 mustload.go:65] Loading cluster: multinode-407000
	I0307 19:35:06.982572    4051 notify.go:220] Checking for updates...
	I0307 19:35:06.982702    4051 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:35:06.982707    4051 status.go:255] checking status of multinode-407000 ...
	I0307 19:35:06.982918    4051 status.go:330] multinode-407000 host status = "Stopped" (err=<nil>)
	I0307 19:35:06.982921    4051 status.go:343] host is not running, skipping remaining checks
	I0307 19:35:06.982924    4051 status.go:257] multinode-407000 status: &{Name:multinode-407000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr": multinode-407000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr": multinode-407000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (31.817541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.31s)
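
The stop itself succeeds (3.17s); the follow-up assertions fail because they count "host: Stopped" and "kubelet: Stopped" lines in the status output, one per node, and the second node was never added, so only the control-plane entry appears. A rough shell analogue of that count (the expected value of 2 is an assumption based on the two-node shape of this serial suite, not stated in the log):

	# Count stopped hosts the way multinode_test.go's assertion effectively does.
	stopped=$(out/minikube-darwin-arm64 -p multinode-407000 status --alsologtostderr | grep -c 'host: Stopped')
	echo "stopped hosts: $stopped"   # 1 in this run, hence the failure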

TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-407000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-407000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186546167s)

                                                
                                                
-- stdout --
	* [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	* Restarting existing qemu2 VM for "multinode-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:35:07.045304    4055 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:35:07.045431    4055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:07.045434    4055 out.go:304] Setting ErrFile to fd 2...
	I0307 19:35:07.045436    4055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:07.045557    4055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:35:07.046577    4055 out.go:298] Setting JSON to false
	I0307 19:35:07.062523    4055 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3879,"bootTime":1709865028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:35:07.062603    4055 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:35:07.067325    4055 out.go:177] * [multinode-407000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:35:07.073236    4055 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:35:07.077249    4055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:35:07.073279    4055 notify.go:220] Checking for updates...
	I0307 19:35:07.083211    4055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:35:07.086221    4055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:35:07.087697    4055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:35:07.091220    4055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:35:07.094512    4055 config.go:182] Loaded profile config "multinode-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:35:07.094761    4055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:35:07.099055    4055 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:35:07.106217    4055 start.go:297] selected driver: qemu2
	I0307 19:35:07.106225    4055 start.go:901] validating driver "qemu2" against &{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:35:07.106293    4055 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:35:07.108533    4055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:35:07.108582    4055 cni.go:84] Creating CNI manager for ""
	I0307 19:35:07.108587    4055 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 19:35:07.108630    4055 start.go:340] cluster config:
	{Name:multinode-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:35:07.112891    4055 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:07.120208    4055 out.go:177] * Starting "multinode-407000" primary control-plane node in "multinode-407000" cluster
	I0307 19:35:07.124255    4055 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:35:07.124269    4055 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:35:07.124281    4055 cache.go:56] Caching tarball of preloaded images
	I0307 19:35:07.124337    4055 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:35:07.124342    4055 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:35:07.124410    4055 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/multinode-407000/config.json ...
	I0307 19:35:07.124861    4055 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:35:07.124887    4055 start.go:364] duration metric: took 18.625µs to acquireMachinesLock for "multinode-407000"
	I0307 19:35:07.124894    4055 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:35:07.124898    4055 fix.go:54] fixHost starting: 
	I0307 19:35:07.125009    4055 fix.go:112] recreateIfNeeded on multinode-407000: state=Stopped err=<nil>
	W0307 19:35:07.125018    4055 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:35:07.129166    4055 out.go:177] * Restarting existing qemu2 VM for "multinode-407000" ...
	I0307 19:35:07.137218    4055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c4:3f:31:55:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:35:07.139222    4055 main.go:141] libmachine: STDOUT: 
	I0307 19:35:07.139243    4055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:35:07.139272    4055 fix.go:56] duration metric: took 14.372458ms for fixHost
	I0307 19:35:07.139276    4055 start.go:83] releasing machines lock for "multinode-407000", held for 14.38575ms
	W0307 19:35:07.139282    4055 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:35:07.139323    4055 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:07.139328    4055 start.go:728] Will try again in 5 seconds ...
	I0307 19:35:12.141307    4055 start.go:360] acquireMachinesLock for multinode-407000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:35:12.141694    4055 start.go:364] duration metric: took 233.5µs to acquireMachinesLock for "multinode-407000"
	I0307 19:35:12.141813    4055 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:35:12.141830    4055 fix.go:54] fixHost starting: 
	I0307 19:35:12.142467    4055 fix.go:112] recreateIfNeeded on multinode-407000: state=Stopped err=<nil>
	W0307 19:35:12.142494    4055 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:35:12.148047    4055 out.go:177] * Restarting existing qemu2 VM for "multinode-407000" ...
	I0307 19:35:12.156064    4055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c4:3f:31:55:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/multinode-407000/disk.qcow2
	I0307 19:35:12.165807    4055 main.go:141] libmachine: STDOUT: 
	I0307 19:35:12.165891    4055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:35:12.165985    4055 fix.go:56] duration metric: took 24.152084ms for fixHost
	I0307 19:35:12.166006    4055 start.go:83] releasing machines lock for "multinode-407000", held for 24.290291ms
	W0307 19:35:12.166250    4055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:12.173878    4055 out.go:177] 
	W0307 19:35:12.178025    4055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:35:12.178060    4055 out.go:239] * 
	* 
	W0307 19:35:12.180901    4055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:35:12.187944    4055 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-407000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (69.429541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
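
The restart path retries exactly once: after the first fixHost failure minikube releases the machines lock, logs "Will try again in 5 seconds ...", and repeats the same QEMU launch before exiting with GUEST_PROVISION; both attempts die at the same socket connect. The daemon can be probed directly, taking minikube and QEMU out of the picture entirely; this assumes the BSD nc bundled with macOS, whose -U flag targets UNIX-domain sockets:

	# "Connection refused" here reproduces the driver failure in isolation.
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket reachable"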

TestMultiNode/serial/ValidateNameConflict (20.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-407000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-407000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-407000-m01 --driver=qemu2 : exit status 80 (9.843439084s)

                                                
                                                
-- stdout --
	* [multinode-407000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-407000-m01" primary control-plane node in "multinode-407000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-407000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-407000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-407000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-407000-m02 --driver=qemu2 : exit status 80 (9.966197584s)

                                                
                                                
-- stdout --
	* [multinode-407000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-407000-m02" primary control-plane node in "multinode-407000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-407000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-407000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-407000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-407000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-407000: exit status 83 (80.096334ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-407000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-407000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-407000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-407000 -n multinode-407000: exit status 7 (32.823583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.06s)
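
This test intentionally creates profiles named multinode-407000-m01 and multinode-407000-m02, which collide with minikube's node-naming scheme (additional nodes in a profile are named <profile>-m02, -m03, ...), and then exercises node add on the original profile. Checking for such collisions up front is a one-liner:

	# List existing profiles before picking a name that may clash with node names.
	out/minikube-darwin-arm64 profile list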

TestPreload (10.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-088000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-088000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.948276709s)

                                                
                                                
-- stdout --
	* [test-preload-088000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-088000" primary control-plane node in "test-preload-088000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:35:32.504646    4119 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:35:32.504778    4119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:32.504782    4119 out.go:304] Setting ErrFile to fd 2...
	I0307 19:35:32.504784    4119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:35:32.504918    4119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:35:32.505980    4119 out.go:298] Setting JSON to false
	I0307 19:35:32.521944    4119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3904,"bootTime":1709865028,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:35:32.522003    4119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:35:32.528123    4119 out.go:177] * [test-preload-088000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:35:32.536073    4119 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:35:32.536114    4119 notify.go:220] Checking for updates...
	I0307 19:35:32.541018    4119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:35:32.544052    4119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:35:32.547051    4119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:35:32.550017    4119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:35:32.553045    4119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:35:32.556324    4119 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:35:32.556379    4119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:35:32.561011    4119 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:35:32.568032    4119 start.go:297] selected driver: qemu2
	I0307 19:35:32.568037    4119 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:35:32.568042    4119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:35:32.570234    4119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:35:32.572963    4119 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:35:32.576189    4119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:35:32.576238    4119 cni.go:84] Creating CNI manager for ""
	I0307 19:35:32.576245    4119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:35:32.576250    4119 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:35:32.576280    4119 start.go:340] cluster config:
	{Name:test-preload-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:35:32.580673    4119 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.586031    4119 out.go:177] * Starting "test-preload-088000" primary control-plane node in "test-preload-088000" cluster
	I0307 19:35:32.590034    4119 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0307 19:35:32.590120    4119 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/test-preload-088000/config.json ...
	I0307 19:35:32.590145    4119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/test-preload-088000/config.json: {Name:mk2434504744b48e0d75a11bf7d61f1de4d9f58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:35:32.590180    4119 cache.go:107] acquiring lock: {Name:mk24a195480de2a1058c401c7ae7b8cb3e1694e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590197    4119 cache.go:107] acquiring lock: {Name:mk63dfe9ad2e3c692e0d0bf36c87bb69404c003b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590187    4119 cache.go:107] acquiring lock: {Name:mk0ee0382040ec9eb6c497af4525d033927b392a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590243    4119 cache.go:107] acquiring lock: {Name:mk37498ecbd2c6cfaf05f38a41d5428d902de205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590321    4119 cache.go:107] acquiring lock: {Name:mkafe392ed89b37669e05598a9e30afb223895fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590384    4119 cache.go:107] acquiring lock: {Name:mke67426e9b96ed5b9f6e1993c76640869e92fff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590388    4119 cache.go:107] acquiring lock: {Name:mk28d66ae62de9818db58f9b775fd34f7adb04e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590444    4119 cache.go:107] acquiring lock: {Name:mk2658485af434db9d060512bbc9db0588d222c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:35:32.590622    4119 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 19:35:32.590677    4119 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 19:35:32.590687    4119 start.go:360] acquireMachinesLock for test-preload-088000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:35:32.590712    4119 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:35:32.590755    4119 start.go:364] duration metric: took 52.958µs to acquireMachinesLock for "test-preload-088000"
	I0307 19:35:32.590624    4119 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 19:35:32.590766    4119 start.go:93] Provisioning new machine with config: &{Name:test-preload-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:35:32.590828    4119 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:35:32.594034    4119 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:35:32.590939    4119 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 19:35:32.590947    4119 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 19:35:32.590970    4119 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:35:32.594627    4119 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:35:32.600221    4119 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 19:35:32.601757    4119 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 19:35:32.603720    4119 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:35:32.603775    4119 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:35:32.603799    4119 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 19:35:32.603843    4119 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 19:35:32.603865    4119 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 19:35:32.603878    4119 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:35:32.611887    4119 start.go:159] libmachine.API.Create for "test-preload-088000" (driver="qemu2")
	I0307 19:35:32.611902    4119 client.go:168] LocalClient.Create starting
	I0307 19:35:32.611991    4119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:35:32.612021    4119 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:32.612034    4119 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:32.612076    4119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:35:32.612097    4119 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:32.612103    4119 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:32.612416    4119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:35:32.755075    4119 main.go:141] libmachine: Creating SSH key...
	I0307 19:35:32.963428    4119 main.go:141] libmachine: Creating Disk image...
	I0307 19:35:32.963460    4119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:35:32.963711    4119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:32.976768    4119 main.go:141] libmachine: STDOUT: 
	I0307 19:35:32.976801    4119 main.go:141] libmachine: STDERR: 
	I0307 19:35:32.976858    4119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2 +20000M
	I0307 19:35:32.989176    4119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:35:32.989193    4119 main.go:141] libmachine: STDERR: 
	I0307 19:35:32.989210    4119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:32.989216    4119 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:35:32.989246    4119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:dd:8d:28:19:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:32.991173    4119 main.go:141] libmachine: STDOUT: 
	I0307 19:35:32.991187    4119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:35:32.991205    4119 client.go:171] duration metric: took 379.314291ms to LocalClient.Create
	I0307 19:35:34.669761    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0307 19:35:34.789595    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0307 19:35:34.802708    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 19:35:34.812531    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0307 19:35:34.815987    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0307 19:35:34.822320    4119 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 19:35:34.822416    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 19:35:34.827832    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0307 19:35:34.926827    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0307 19:35:34.926882    4119 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.336674375s
	I0307 19:35:34.926952    4119 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0307 19:35:34.991332    4119 start.go:128] duration metric: took 2.400581791s to createHost
	I0307 19:35:34.991381    4119 start.go:83] releasing machines lock for "test-preload-088000", held for 2.400714208s
	W0307 19:35:34.991483    4119 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:35.008420    4119 out.go:177] * Deleting "test-preload-088000" in qemu2 ...
	W0307 19:35:35.036003    4119 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:35.036038    4119 start.go:728] Will try again in 5 seconds ...
	W0307 19:35:35.530068    4119 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 19:35:35.530165    4119 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 19:35:36.572184    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0307 19:35:36.572229    4119 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.982023792s
	I0307 19:35:36.572252    4119 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0307 19:35:37.072014    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0307 19:35:37.072067    4119 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.482002208s
	I0307 19:35:37.072105    4119 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0307 19:35:37.374451    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 19:35:37.374504    4119 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.784516125s
	I0307 19:35:37.374532    4119 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 19:35:38.635654    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0307 19:35:38.635713    4119 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.045768458s
	I0307 19:35:38.635742    4119 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0307 19:35:38.964404    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0307 19:35:38.964464    4119 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.374549542s
	I0307 19:35:38.964504    4119 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0307 19:35:39.609964    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0307 19:35:39.610018    4119 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.019848167s
	I0307 19:35:39.610036    4119 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0307 19:35:40.036159    4119 start.go:360] acquireMachinesLock for test-preload-088000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:35:40.036509    4119 start.go:364] duration metric: took 265.833µs to acquireMachinesLock for "test-preload-088000"
	I0307 19:35:40.036610    4119 start.go:93] Provisioning new machine with config: &{Name:test-preload-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:35:40.036809    4119 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:35:40.044382    4119 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:35:40.093334    4119 start.go:159] libmachine.API.Create for "test-preload-088000" (driver="qemu2")
	I0307 19:35:40.093396    4119 client.go:168] LocalClient.Create starting
	I0307 19:35:40.093508    4119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:35:40.093560    4119 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:40.093576    4119 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:40.093645    4119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:35:40.093687    4119 main.go:141] libmachine: Decoding PEM data...
	I0307 19:35:40.093700    4119 main.go:141] libmachine: Parsing certificate...
	I0307 19:35:40.094179    4119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:35:40.241872    4119 main.go:141] libmachine: Creating SSH key...
	I0307 19:35:40.352140    4119 main.go:141] libmachine: Creating Disk image...
	I0307 19:35:40.352150    4119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:35:40.352324    4119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:40.365108    4119 main.go:141] libmachine: STDOUT: 
	I0307 19:35:40.365128    4119 main.go:141] libmachine: STDERR: 
	I0307 19:35:40.365192    4119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2 +20000M
	I0307 19:35:40.376511    4119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:35:40.376530    4119 main.go:141] libmachine: STDERR: 
	I0307 19:35:40.376545    4119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:40.376549    4119 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:35:40.376597    4119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:25:84:4b:84:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/test-preload-088000/disk.qcow2
	I0307 19:35:40.378473    4119 main.go:141] libmachine: STDOUT: 
	I0307 19:35:40.378492    4119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:35:40.378509    4119 client.go:171] duration metric: took 285.119791ms to LocalClient.Create
	I0307 19:35:42.187404    4119 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0307 19:35:42.187529    4119 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.597556959s
	I0307 19:35:42.187559    4119 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0307 19:35:42.187610    4119 cache.go:87] Successfully saved all images to host disk.
	I0307 19:35:42.380869    4119 start.go:128] duration metric: took 2.344126542s to createHost
	I0307 19:35:42.380927    4119 start.go:83] releasing machines lock for "test-preload-088000", held for 2.344486958s
	W0307 19:35:42.381253    4119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:35:42.390830    4119 out.go:177] 
	W0307 19:35:42.393788    4119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:35:42.393813    4119 out.go:239] * 
	* 
	W0307 19:35:42.396780    4119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:35:42.405745    4119 out.go:177] 

** /stderr **
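
The two "arch mismatch: want arm64 got amd64. fixing" warnings in the stderr block above show the image cache validating cached images against the host architecture before re-downloading them. A minimal Go sketch of that kind of check, for illustration only (the config.json path and layout are assumptions, not minikube's actual cache format):

    // archcheck.go: compare an OCI image config's "architecture" field
    // against the host architecture, as the cache warnings above describe.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "runtime"
    )

    type imageConfig struct {
        Architecture string `json:"architecture"`
    }

    func main() {
        data, err := os.ReadFile("config.json") // hypothetical cached image config
        if err != nil {
            panic(err)
        }
        var cfg imageConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        if cfg.Architecture != runtime.GOARCH {
            // mirrors the log: "arch mismatch: want arm64 got amd64. fixing"
            fmt.Printf("arch mismatch: want %s got %s. fixing\n", runtime.GOARCH, cfg.Architecture)
        }
    }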
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-088000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-07 19:35:42.425452 -0800 PST m=+2403.748342042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-088000 -n test-preload-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-088000 -n test-preload-088000: exit status 7 (67.826083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-088000
--- FAIL: TestPreload (10.13s)
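
TestPreload never reaches the preload behavior under test: both createHost attempts above die when /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon at /var/run/socket_vmnet. A minimal Go probe for that precondition (illustrative only; the socket path is taken from the failing command line in the log):

    // vmnetprobe.go: check whether the socket_vmnet daemon is accepting
    // connections on its unix socket, the step that fails throughout this run.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing command line
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1) // corresponds to the "Connection refused" in the log
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is up; QEMU networking can start")
    }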

TestScheduledStopUnix (10.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-429000 --memory=2048 --driver=qemu2 
E0307 19:35:44.670975    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-429000 --memory=2048 --driver=qemu2 : exit status 80 (9.896905292s)

-- stdout --
	* [scheduled-stop-429000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-429000" primary control-plane node in "scheduled-stop-429000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-429000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-429000" primary control-plane node in "scheduled-stop-429000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-07 19:35:52.49763 -0800 PST m=+2413.820932376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-429000 -n scheduled-stop-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-429000 -n scheduled-stop-429000: exit status 7 (69.299708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-429000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-429000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-429000
--- FAIL: TestScheduledStopUnix (10.07s)
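
The stdout above shows the driver's recovery shape: a failed create, a profile delete, a fixed five-second wait, then one final attempt whose error becomes the GUEST_PROVISION exit. A hedged Go sketch of that control flow (startHost is a hypothetical stand-in, not minikube's API):

    // retrysketch.go: one failed attempt, a fixed delay, then a final
    // attempt whose error is surfaced, matching the log's narration.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        // stand-in for host creation; fails the same way both times here
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        }
        if err := startHost(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }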

TestSkaffold (16.62s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3685679188 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-111000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-111000 --memory=2600 --driver=qemu2 : exit status 80 (9.855523333s)

-- stdout --
	* [skaffold-111000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-111000" primary control-plane node in "skaffold-111000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-111000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-111000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-111000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-111000" primary control-plane node in "skaffold-111000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-111000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-111000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-07 19:36:09.113015 -0800 PST m=+2430.436997251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-111000 -n skaffold-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-111000 -n skaffold-111000: exit status 7 (66.498917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-111000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-111000
--- FAIL: TestSkaffold (16.62s)
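
TestSkaffold fails before skaffold itself runs, for the same socket_vmnet reason. One way a harness could turn this whole class of failures into explicit skips is a precondition helper along these lines (hypothetical; no such guard exists in the suite):

    // vmnetguard_test.go: skip a test when the socket_vmnet daemon is
    // unreachable, rather than failing after the full start timeout.
    package integration

    import (
        "net"
        "testing"
        "time"
    )

    func requireSocketVMnet(t *testing.T) {
        t.Helper()
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            t.Skipf("socket_vmnet unavailable, skipping: %v", err)
        }
        conn.Close()
    }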

TestRunningBinaryUpgrade (634.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1919143745 start -p running-upgrade-440000 --memory=2200 --vm-driver=qemu2 
E0307 19:37:37.704241    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1919143745 start -p running-upgrade-440000 --memory=2200 --vm-driver=qemu2 : (1m22.900112375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-440000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0307 19:38:47.729889    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-440000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.395662917s)

-- stdout --
	* [running-upgrade-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-440000" primary control-plane node in "running-upgrade-440000" cluster
	* Updating the running qemu2 "running-upgrade-440000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0307 19:38:17.883071    4574 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:38:17.883203    4574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:38:17.883207    4574 out.go:304] Setting ErrFile to fd 2...
	I0307 19:38:17.883209    4574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:38:17.883350    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:38:17.884487    4574 out.go:298] Setting JSON to false
	I0307 19:38:17.900873    4574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4069,"bootTime":1709865028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:38:17.900937    4574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:38:17.905869    4574 out.go:177] * [running-upgrade-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:38:17.912846    4574 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:38:17.912908    4574 notify.go:220] Checking for updates...
	I0307 19:38:17.919807    4574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:38:17.923861    4574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:38:17.926812    4574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:38:17.929859    4574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:38:17.932834    4574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:38:17.935993    4574 config.go:182] Loaded profile config "running-upgrade-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:38:17.938783    4574 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 19:38:17.941813    4574 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:38:17.944778    4574 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:38:17.951845    4574 start.go:297] selected driver: qemu2
	I0307 19:38:17.951851    4574 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50311 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:38:17.951899    4574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:38:17.954125    4574 cni.go:84] Creating CNI manager for ""
	I0307 19:38:17.954142    4574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:38:17.954172    4574 start.go:340] cluster config:
	{Name:running-upgrade-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50311 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:38:17.954219    4574 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:38:17.961835    4574 out.go:177] * Starting "running-upgrade-440000" primary control-plane node in "running-upgrade-440000" cluster
	I0307 19:38:17.965868    4574 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:38:17.965880    4574 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 19:38:17.965888    4574 cache.go:56] Caching tarball of preloaded images
	I0307 19:38:17.965932    4574 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:38:17.965937    4574 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 19:38:17.965982    4574 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/config.json ...
	I0307 19:38:17.966417    4574 start.go:360] acquireMachinesLock for running-upgrade-440000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:38:17.966442    4574 start.go:364] duration metric: took 19.25µs to acquireMachinesLock for "running-upgrade-440000"
	I0307 19:38:17.966449    4574 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:38:17.966453    4574 fix.go:54] fixHost starting: 
	I0307 19:38:17.967152    4574 fix.go:112] recreateIfNeeded on running-upgrade-440000: state=Running err=<nil>
	W0307 19:38:17.967160    4574 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:38:17.975800    4574 out.go:177] * Updating the running qemu2 "running-upgrade-440000" VM ...
	I0307 19:38:17.979795    4574 machine.go:94] provisionDockerMachine start ...
	I0307 19:38:17.979846    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:17.979976    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:17.979981    4574 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:38:18.033476    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-440000
	
	I0307 19:38:18.033488    4574 buildroot.go:166] provisioning hostname "running-upgrade-440000"
	I0307 19:38:18.033526    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.033626    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.033632    4574 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-440000 && echo "running-upgrade-440000" | sudo tee /etc/hostname
	I0307 19:38:18.089733    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-440000
	
	I0307 19:38:18.089780    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.089877    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.089885    4574 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-440000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-440000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-440000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 19:38:18.139518    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:38:18.139531    4574 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18333-1199/.minikube CaCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18333-1199/.minikube}
	I0307 19:38:18.139539    4574 buildroot.go:174] setting up certificates
	I0307 19:38:18.139543    4574 provision.go:84] configureAuth start
	I0307 19:38:18.139549    4574 provision.go:143] copyHostCerts
	I0307 19:38:18.139605    4574 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem, removing ...
	I0307 19:38:18.139613    4574 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem
	I0307 19:38:18.139708    4574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem (1082 bytes)
	I0307 19:38:18.139861    4574 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem, removing ...
	I0307 19:38:18.139867    4574 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem
	I0307 19:38:18.139910    4574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem (1123 bytes)
	I0307 19:38:18.140001    4574 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem, removing ...
	I0307 19:38:18.140006    4574 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem
	I0307 19:38:18.140044    4574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem (1675 bytes)
	I0307 19:38:18.140125    4574 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-440000 san=[127.0.0.1 localhost minikube running-upgrade-440000]
	I0307 19:38:18.257280    4574 provision.go:177] copyRemoteCerts
	I0307 19:38:18.257308    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:38:18.257316    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:38:18.287788    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 19:38:18.294590    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 19:38:18.301860    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:38:18.308560    4574 provision.go:87] duration metric: took 169.019209ms to configureAuth
	I0307 19:38:18.308570    4574 buildroot.go:189] setting minikube options for container-runtime
	I0307 19:38:18.308675    4574 config.go:182] Loaded profile config "running-upgrade-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:38:18.308713    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.308802    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.308806    4574 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 19:38:18.362864    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 19:38:18.362876    4574 buildroot.go:70] root file system type: tmpfs
	I0307 19:38:18.362925    4574 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 19:38:18.362974    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.363068    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.363102    4574 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 19:38:18.419484    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 19:38:18.419525    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.419632    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.419641    4574 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 19:38:18.473971    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:38:18.473982    4574 machine.go:97] duration metric: took 494.200834ms to provisionDockerMachine
	I0307 19:38:18.473987    4574 start.go:293] postStartSetup for "running-upgrade-440000" (driver="qemu2")
	I0307 19:38:18.473993    4574 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:38:18.474054    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:38:18.474063    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:38:18.503018    4574 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:38:18.504218    4574 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 19:38:18.504226    4574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/addons for local assets ...
	I0307 19:38:18.504275    4574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/files for local assets ...
	I0307 19:38:18.504360    4574 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem -> 16202.pem in /etc/ssl/certs
	I0307 19:38:18.504444    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 19:38:18.507027    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:38:18.513912    4574 start.go:296] duration metric: took 39.921584ms for postStartSetup
	I0307 19:38:18.513925    4574 fix.go:56] duration metric: took 547.49475ms for fixHost
	I0307 19:38:18.513955    4574 main.go:141] libmachine: Using SSH client type: native
	I0307 19:38:18.514050    4574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102961a30] 0x102964290 <nil>  [] 0s} localhost 50279 <nil> <nil>}
	I0307 19:38:18.514054    4574 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 19:38:18.567191    4574 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709869098.741015474
	
	I0307 19:38:18.567200    4574 fix.go:216] guest clock: 1709869098.741015474
	I0307 19:38:18.567204    4574 fix.go:229] Guest: 2024-03-07 19:38:18.741015474 -0800 PST Remote: 2024-03-07 19:38:18.513926 -0800 PST m=+0.652375501 (delta=227.089474ms)
	I0307 19:38:18.567215    4574 fix.go:200] guest clock delta is within tolerance: 227.089474ms
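	Note: the delta above is simply guest time minus host time: 1709869098.741015474 - 1709869098.513926 ≈ 0.227089 s, i.e. the reported 227.089474ms, which the fix step accepts as within tolerance.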
	I0307 19:38:18.567218    4574 start.go:83] releasing machines lock for "running-upgrade-440000", held for 600.7975ms
	I0307 19:38:18.567279    4574 ssh_runner.go:195] Run: cat /version.json
	I0307 19:38:18.567283    4574 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:38:18.567287    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:38:18.567297    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	W0307 19:38:18.567917    4574 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50279: connect: connection refused
	I0307 19:38:18.567936    4574 retry.go:31] will retry after 328.796423ms: dial tcp [::1]:50279: connect: connection refused
	W0307 19:38:18.941554    4574 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 19:38:18.941727    4574 ssh_runner.go:195] Run: systemctl --version
	I0307 19:38:18.945323    4574 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 19:38:18.948119    4574 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 19:38:18.948162    4574 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 19:38:18.953128    4574 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 19:38:18.959804    4574 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
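	Note: the two find/sed passes above normalize any bridge/podman CNI configs to the cluster pod CIDR, dropping IPv6 entries and rewriting IPv4 ones. After the rewrite, the IPAM section of 87-podman-bridge.conflist carries values like the following (the surrounding JSON structure is an assumption; only the two values are confirmed by the sed expressions):
		"subnet": "10.244.0.0/16",
		"gateway": "10.244.0.1"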
	I0307 19:38:18.959818    4574 start.go:494] detecting cgroup driver to use...
	I0307 19:38:18.959961    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:38:18.967079    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 19:38:18.971203    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:38:18.974822    4574 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:38:18.974853    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:38:18.978355    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:38:18.981805    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:38:18.984931    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:38:18.988058    4574 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:38:18.991058    4574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
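	Note: taken together, the sed edits above leave /etc/containerd/config.toml with the following settings (a sketch showing only the touched keys):
		sandbox_image = "registry.k8s.io/pause:3.7"
		restrict_oom_score_adj = false
		SystemdCgroup = false        # i.e. the cgroupfs cgroup driver
		conf_dir = "/etc/cni/net.d"
		# plus every runtime type rewritten to "io.containerd.runc.v2"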
	I0307 19:38:18.994510    4574 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:38:18.997392    4574 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:38:18.999855    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:19.099316    4574 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 19:38:19.105652    4574 start.go:494] detecting cgroup driver to use...
	I0307 19:38:19.105711    4574 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 19:38:19.112539    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:38:19.117814    4574 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 19:38:19.129908    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:38:19.134438    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:38:19.139172    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:38:19.144148    4574 ssh_runner.go:195] Run: which cri-dockerd
	I0307 19:38:19.145330    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 19:38:19.147712    4574 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 19:38:19.152731    4574 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 19:38:19.234717    4574 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 19:38:19.330710    4574 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 19:38:19.330782    4574 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
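	Note: only the cgroupfs driver is confirmed by the log line above; a plausible shape for the 130-byte daemon.json written here would be (everything beyond exec-opts is an assumption about typical contents):
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file",
		  "log-opts": { "max-size": "100m" },
		  "storage-driver": "overlay2"
		}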
	I0307 19:38:19.336136    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:19.424698    4574 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:38:32.083622    4574 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.65942575s)
	I0307 19:38:32.083687    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 19:38:32.090789    4574 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 19:38:32.099096    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:38:32.104031    4574 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 19:38:32.180883    4574 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 19:38:32.258350    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:32.339188    4574 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 19:38:32.345657    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:38:32.349917    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:32.428340    4574 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 19:38:32.470441    4574 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 19:38:32.470519    4574 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 19:38:32.473091    4574 start.go:562] Will wait 60s for crictl version
	I0307 19:38:32.473143    4574 ssh_runner.go:195] Run: which crictl
	I0307 19:38:32.474617    4574 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:38:32.486684    4574 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 19:38:32.486758    4574 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:38:32.498976    4574 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:38:32.515932    4574 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 19:38:32.516045    4574 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 19:38:32.517349    4574 kubeadm.go:877] updating cluster {Name:running-upgrade-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50311 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 19:38:32.517392    4574 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:38:32.517427    4574 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:38:32.527534    4574 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:38:32.527551    4574 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 19:38:32.527598    4574 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:38:32.531000    4574 ssh_runner.go:195] Run: which lz4
	I0307 19:38:32.532286    4574 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 19:38:32.533630    4574 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 19:38:32.533642    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 19:38:33.225432    4574 docker.go:649] duration metric: took 693.191583ms to copy over tarball
	I0307 19:38:33.225487    4574 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 19:38:34.437217    4574 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.211767292s)
	I0307 19:38:34.437229    4574 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 19:38:34.453381    4574 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:38:34.456804    4574 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 19:38:34.462244    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:34.546778    4574 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:38:35.777246    4574 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.230503625s)
	I0307 19:38:35.777338    4574 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:38:35.789826    4574 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:38:35.789835    4574 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 19:38:35.789840    4574 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 19:38:35.797470    4574 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:38:35.798938    4574 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 19:38:35.799033    4574 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:38:35.799090    4574 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:38:35.799216    4574 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:38:35.799250    4574 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:38:35.799558    4574 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:38:35.800074    4574 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:38:35.808223    4574 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:38:35.808262    4574 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:38:35.808309    4574 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 19:38:35.809064    4574 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:38:35.809083    4574 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:38:35.809100    4574 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:38:35.809148    4574 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:38:35.809167    4574 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	W0307 19:38:37.777855    4574 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 19:38:37.778530    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:38:37.817641    4574 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 19:38:37.817687    4574 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:38:37.817784    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:38:37.842132    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 19:38:37.842261    4574 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:38:37.844677    4574 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 19:38:37.844691    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 19:38:37.867068    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:38:37.886543    4574 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:38:37.887554    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 19:38:37.891465    4574 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 19:38:37.891483    4574 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:38:37.891537    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:38:37.897865    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:38:37.902410    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 19:38:37.910358    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:38:37.910546    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 19:38:37.922460    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:38:37.960164    4574 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 19:38:37.960211    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 19:38:37.960235    4574 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 19:38:37.960248    4574 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 19:38:37.960250    4574 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:38:37.960257    4574 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 19:38:37.960279    4574 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 19:38:37.960288    4574 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:38:37.960293    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 19:38:37.960307    4574 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 19:38:37.960314    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:38:37.960320    4574 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:38:37.960293    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:38:37.960311    4574 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 19:38:37.960339    4574 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:38:37.960348    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 19:38:37.960356    4574 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:38:37.976304    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 19:38:37.991106    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 19:38:37.991120    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 19:38:37.991150    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 19:38:37.991210    4574 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:38:37.991210    4574 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 19:38:37.991241    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 19:38:37.992791    4574 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 19:38:37.992802    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 19:38:37.992879    4574 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0307 19:38:37.992888    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0307 19:38:38.001594    4574 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 19:38:38.001606    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 19:38:38.052448    4574 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0307 19:38:38.194385    4574 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:38:38.194407    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0307 19:38:38.216261    4574 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 19:38:38.216402    4574 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:38:38.345473    4574 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0307 19:38:38.345499    4574 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 19:38:38.345517    4574 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:38:38.345570    4574 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:38:39.977436    4574 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.6318965s)
	I0307 19:38:39.977478    4574 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 19:38:39.977861    4574 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:38:39.982817    4574 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 19:38:39.982895    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 19:38:40.036883    4574 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:38:40.036899    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 19:38:40.275818    4574 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 19:38:40.275861    4574 cache_images.go:92] duration metric: took 4.486197625s to LoadCachedImages
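	Note: the whole LoadCachedImages pass above repeats one per-image pattern: inspect the image ID in the runtime, remove the wrong-arch (amd64) copy, scp the cached arm64 tarball into the guest, and pipe it to docker load. As a shell sketch with placeholder names:
		IMAGE=registry.k8s.io/pause:3.7          # placeholder
		TAR=/var/lib/minikube/images/pause_3.7   # placeholder, scp'd from the host cache
		docker image inspect --format '{{.Id}}' "$IMAGE"   # hash mismatch => transfer needed
		docker rmi "$IMAGE"                                # drop the amd64 copy
		sudo cat "$TAR" | docker load                      # load the arm64 copy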
	W0307 19:38:40.275905    4574 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0307 19:38:40.275911    4574 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 19:38:40.275963    4574 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-440000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 19:38:40.276023    4574 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 19:38:40.289386    4574 cni.go:84] Creating CNI manager for ""
	I0307 19:38:40.289397    4574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:38:40.289401    4574 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:38:40.289410    4574 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-440000 NodeName:running-upgrade-440000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 19:38:40.289471    4574 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-440000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
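	Note: one way to sanity-check a rendered config like the one above by hand (a sketch, assuming the kubeadm v1.24.1 binary staged below; --dry-run performs no changes on the node):
		sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml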
	
	I0307 19:38:40.289527    4574 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 19:38:40.292313    4574 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:38:40.292343    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:38:40.294864    4574 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 19:38:40.299608    4574 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:38:40.304564    4574 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 19:38:40.309935    4574 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 19:38:40.311407    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:38:40.388487    4574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:38:40.393872    4574 certs.go:68] Setting up /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000 for IP: 10.0.2.15
	I0307 19:38:40.393879    4574 certs.go:194] generating shared ca certs ...
	I0307 19:38:40.393890    4574 certs.go:226] acquiring lock for ca certs: {Name:mkeed6c4d5ba27d3ef2bc04c52c43819ca546cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:38:40.394033    4574 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key
	I0307 19:38:40.394065    4574 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key
	I0307 19:38:40.394071    4574 certs.go:256] generating profile certs ...
	I0307 19:38:40.394131    4574 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.key
	I0307 19:38:40.394147    4574 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key.8eb4da73
	I0307 19:38:40.394157    4574 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt.8eb4da73 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 19:38:40.481815    4574 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt.8eb4da73 ...
	I0307 19:38:40.481821    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt.8eb4da73: {Name:mke7f58f35203a4df907ee69675ae0768eb79c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:38:40.482044    4574 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key.8eb4da73 ...
	I0307 19:38:40.482049    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key.8eb4da73: {Name:mk67096f6c93ee7ec3c02bdf7d690116f8c42bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:38:40.482186    4574 certs.go:381] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt.8eb4da73 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt
	I0307 19:38:40.482311    4574 certs.go:385] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key.8eb4da73 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key
	I0307 19:38:40.482440    4574 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/proxy-client.key
	I0307 19:38:40.482547    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem (1338 bytes)
	W0307 19:38:40.482566    4574 certs.go:480] ignoring /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620_empty.pem, impossibly tiny 0 bytes
	I0307 19:38:40.482570    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:38:40.482592    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:38:40.482608    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:38:40.482624    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem (1675 bytes)
	I0307 19:38:40.482660    4574 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:38:40.482945    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:38:40.489833    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:38:40.497784    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:38:40.506065    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:38:40.513202    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 19:38:40.520678    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 19:38:40.527994    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:38:40.534880    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 19:38:40.541593    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem --> /usr/share/ca-certificates/1620.pem (1338 bytes)
	I0307 19:38:40.548895    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /usr/share/ca-certificates/16202.pem (1708 bytes)
	I0307 19:38:40.555772    4574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:38:40.562389    4574 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:38:40.567529    4574 ssh_runner.go:195] Run: openssl version
	I0307 19:38:40.569361    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1620.pem && ln -fs /usr/share/ca-certificates/1620.pem /etc/ssl/certs/1620.pem"
	I0307 19:38:40.572810    4574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1620.pem
	I0307 19:38:40.574149    4574 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:04 /usr/share/ca-certificates/1620.pem
	I0307 19:38:40.574169    4574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1620.pem
	I0307 19:38:40.575966    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1620.pem /etc/ssl/certs/51391683.0"
	I0307 19:38:40.578623    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16202.pem && ln -fs /usr/share/ca-certificates/16202.pem /etc/ssl/certs/16202.pem"
	I0307 19:38:40.581743    4574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16202.pem
	I0307 19:38:40.583127    4574 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:04 /usr/share/ca-certificates/16202.pem
	I0307 19:38:40.583146    4574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16202.pem
	I0307 19:38:40.584814    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16202.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:38:40.587822    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:38:40.590814    4574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:38:40.592166    4574 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:57 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:38:40.592183    4574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:38:40.593825    4574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
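	Note: the test/ln pairs above implement OpenSSL's hashed-directory CA lookup: each certificate is symlinked as <subject-hash>.0 so verification can find it by hash (here 51391683, 3ec20f2e and b5213941). The hash comes from the certificate itself, as in this sketch:
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # h=b5213941 here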
	I0307 19:38:40.596537    4574 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:38:40.597921    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 19:38:40.599780    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 19:38:40.601671    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 19:38:40.603528    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 19:38:40.605306    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 19:38:40.607256    4574 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
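	Note: each -checkend 86400 call above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 h), so a non-zero status flags a cert that is expired or about to expire:
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo "valid for >24h" || echo "expires within 24h"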
	I0307 19:38:40.609118    4574 kubeadm.go:391] StartCluster: {Name:running-upgrade-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50311 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:38:40.609182    4574 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:38:40.618811    4574 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 19:38:40.622028    4574 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 19:38:40.622036    4574 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 19:38:40.622039    4574 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 19:38:40.622061    4574 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 19:38:40.625048    4574 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:38:40.625286    4574 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-440000" does not appear in /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:38:40.625333    4574 kubeconfig.go:62] /Users/jenkins/minikube-integration/18333-1199/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-440000" cluster setting kubeconfig missing "running-upgrade-440000" context setting]
	I0307 19:38:40.625495    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:38:40.626946    4574 kapi.go:59] client config for running-upgrade-440000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c576a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:38:40.627254    4574 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 19:38:40.630471    4574 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-440000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0307 19:38:40.630477    4574 kubeadm.go:1153] stopping kube-system containers ...
	I0307 19:38:40.630522    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:38:40.641549    4574 docker.go:483] Stopping containers: [54f37fea78c7 ddb9779e6d97 1ba821cda6b5 e775b0f452d6 a1200a15ef20 6d03d913decf e77fdd625530 bac00c8cd148 12252a08a047 00bc549e717f 9d7199e249e4 87cc7f63b99d]
	I0307 19:38:40.641609    4574 ssh_runner.go:195] Run: docker stop 54f37fea78c7 ddb9779e6d97 1ba821cda6b5 e775b0f452d6 a1200a15ef20 6d03d913decf e77fdd625530 bac00c8cd148 12252a08a047 00bc549e717f 9d7199e249e4 87cc7f63b99d
	I0307 19:38:40.652878    4574 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 19:38:40.765188    4574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:38:40.769617    4574 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar  8 03:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar  8 03:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar  8 03:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar  8 03:38 /etc/kubernetes/scheduler.conf
	
	I0307 19:38:40.769652    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf
	I0307 19:38:40.773188    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:38:40.773221    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:38:40.776769    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf
	I0307 19:38:40.779876    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:38:40.779903    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:38:40.782811    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf
	I0307 19:38:40.785671    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:38:40.785695    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:38:40.788612    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf
	I0307 19:38:40.791124    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:38:40.791144    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 19:38:40.793823    4574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:38:40.797215    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:38:40.821695    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:38:41.463449    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:38:41.719272    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:38:41.754504    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:38:41.775132    4574 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:38:41.775213    4574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:38:42.277268    4574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:38:42.777218    4574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:38:42.781283    4574 api_server.go:72] duration metric: took 1.00619275s to wait for apiserver process to appear ...
	I0307 19:38:42.781294    4574 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:38:42.781304    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:38:47.781584    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:38:47.781621    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:38:52.783104    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:38:52.783179    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:38:57.783827    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:38:57.783869    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:02.784363    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:02.784479    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:07.785490    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:07.785572    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:12.786967    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:12.787047    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:17.788810    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:17.788894    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:22.791216    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:22.791293    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:27.793142    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:27.793222    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:32.795603    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:32.795685    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:37.798219    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:37.798300    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:42.798941    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
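	[annotation] Every healthz probe above is an HTTPS GET against https://10.0.2.15:8443/healthz with a 5-second client timeout; "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the apiserver never sent a response within that window, so each probe fails after exactly 5s and is logged as "stopped:". A minimal sketch of a single-URL probe loop, assuming an InsecureSkipVerify client to keep the example self-contained (minikube actually authenticates with the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Each probe gets its own 5s budget; a hung apiserver shows up as
        // "context deadline exceeded ... while awaiting headers".
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
                continue // in minikube, repeated failures trigger the log-gathering pass below
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthz ok")
                return
            }
        }
    }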
	I0307 19:39:42.799214    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:39:42.826743    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:39:42.826857    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:39:42.842389    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:39:42.842493    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:39:42.854584    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:39:42.854655    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:39:42.865710    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:39:42.865790    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:39:42.877213    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:39:42.877276    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:39:42.887631    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:39:42.887695    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:39:42.897448    4574 logs.go:276] 0 containers: []
	W0307 19:39:42.897459    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:39:42.897515    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:39:42.907371    4574 logs.go:276] 0 containers: []
	W0307 19:39:42.907386    4574 logs.go:278] No container was found matching "storage-provisioner"
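	[annotation] Before dumping logs, minikube enumerates the containers for each control-plane component with docker ps -a --filter name=k8s_<component> --format {{.ID}} (the k8s_ prefix is what kubelet's Docker integration gives pod containers); two IDs per component here mean an exited attempt plus its restart, and an empty result produces the "No container was found matching" warnings seen for kindnet and storage-provisioner. A sketch of that discovery step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches the k8s_<component> prefix.
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids := containerIDs(c)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }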
	I0307 19:39:42.907394    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:39:42.907400    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:39:42.945214    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:39:42.945226    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:39:42.965439    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:39:42.965450    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:39:42.982218    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:39:42.982228    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:39:42.996965    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:39:42.996979    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:39:43.009317    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:39:43.009330    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:39:43.013547    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:39:43.013556    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:39:43.087375    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:39:43.087389    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:39:43.101108    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:39:43.101122    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:39:43.114120    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:39:43.114134    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:39:43.125653    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:39:43.125665    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:39:43.151787    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:39:43.151795    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:39:43.166579    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:39:43.166590    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:39:43.177574    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:39:43.177588    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:39:43.191422    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:39:43.191434    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
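	[annotation] The block above is one complete evidence-gathering pass: the kubelet journal, the last 400 lines of each discovered container, dmesg, kubectl describe nodes, and a container status listing via sudo `which crictl || echo crictl` ps -a || sudo docker ps -a (if crictl is on PATH its full path is substituted; otherwise the bare name fails and the pipeline falls back to docker ps -a). Each pass is followed by another 5s healthz probe, and the same cycle repeats below roughly every eight seconds for the rest of this section. A hypothetical skeleton of that retry loop (the overall budget is an assumption; the log does not show the limit):

    package main

    import "time"

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            if healthzOK() {
                return
            }
            // On every failed probe, collect the same fixed set of diagnostics
            // before trying again.
            gatherLogs()
        }
    }

    func healthzOK() bool { return false } // probe https://10.0.2.15:8443/healthz with a 5s timeout
    func gatherLogs()     {}               // kubelet journal, per-container docker logs --tail 400, dmesg, describe nodes, container status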
	I0307 19:39:45.705310    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:50.707520    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:50.707666    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:39:50.732488    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:39:50.732597    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:39:50.748768    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:39:50.748852    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:39:50.762498    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:39:50.762568    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:39:50.777965    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:39:50.778036    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:39:50.788345    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:39:50.788421    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:39:50.798310    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:39:50.798370    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:39:50.807854    4574 logs.go:276] 0 containers: []
	W0307 19:39:50.807868    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:39:50.807918    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:39:50.817875    4574 logs.go:276] 0 containers: []
	W0307 19:39:50.817887    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:39:50.817895    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:39:50.817900    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:39:50.829936    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:39:50.829949    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:39:50.843983    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:39:50.844001    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:39:50.855036    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:39:50.855048    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:39:50.866278    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:39:50.866288    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:39:50.886964    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:39:50.886977    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:39:50.898595    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:39:50.898606    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:39:50.924590    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:39:50.924598    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:39:50.935798    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:39:50.935810    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:39:50.940204    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:39:50.940212    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:39:50.976707    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:39:50.976722    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:39:50.990605    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:39:50.990618    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:39:51.027402    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:39:51.027410    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:39:51.042325    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:39:51.042340    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:39:51.055716    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:39:51.055728    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:39:53.571062    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:39:58.573651    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:39:58.574035    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:39:58.614241    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:39:58.614366    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:39:58.635982    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:39:58.636088    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:39:58.651459    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:39:58.651530    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:39:58.665610    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:39:58.665682    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:39:58.676005    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:39:58.676071    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:39:58.686529    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:39:58.686599    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:39:58.709243    4574 logs.go:276] 0 containers: []
	W0307 19:39:58.709255    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:39:58.709314    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:39:58.719092    4574 logs.go:276] 0 containers: []
	W0307 19:39:58.719104    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:39:58.719112    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:39:58.719117    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:39:58.735210    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:39:58.735223    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:39:58.747010    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:39:58.747023    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:39:58.751681    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:39:58.751689    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:39:58.786240    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:39:58.786253    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:39:58.800437    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:39:58.800448    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:39:58.814563    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:39:58.814573    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:39:58.826002    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:39:58.826014    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:39:58.839666    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:39:58.839677    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:39:58.853961    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:39:58.853974    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:39:58.880339    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:39:58.880348    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:39:58.891457    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:39:58.891468    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:39:58.926517    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:39:58.926528    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:39:58.944101    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:39:58.944114    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:39:58.956965    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:39:58.956979    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:01.476137    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:06.478501    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:06.478765    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:06.510432    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:06.510551    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:06.527987    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:06.528067    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:06.541687    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:06.541758    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:06.553002    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:06.553071    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:06.563552    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:06.563620    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:06.574042    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:06.574109    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:06.584434    4574 logs.go:276] 0 containers: []
	W0307 19:40:06.584446    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:06.584501    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:06.598432    4574 logs.go:276] 0 containers: []
	W0307 19:40:06.598443    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:06.598451    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:06.598457    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:06.612405    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:06.612415    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:06.627186    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:06.627201    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:06.640166    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:06.640178    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:06.654264    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:06.654277    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:06.668498    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:06.668510    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:06.679514    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:06.679527    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:06.697073    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:06.697084    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:06.708661    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:06.708671    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:06.712812    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:06.712817    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:06.749382    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:06.749393    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:06.760693    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:06.760703    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:06.774708    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:06.774721    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:06.799268    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:06.799280    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:06.810772    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:06.810781    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:09.352725    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:14.355316    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:14.355511    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:14.373381    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:14.373455    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:14.384984    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:14.385066    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:14.401013    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:14.401079    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:14.411599    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:14.411671    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:14.423121    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:14.423194    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:14.434151    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:14.434221    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:14.444852    4574 logs.go:276] 0 containers: []
	W0307 19:40:14.444866    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:14.444921    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:14.455498    4574 logs.go:276] 0 containers: []
	W0307 19:40:14.455509    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:14.455529    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:14.455537    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:14.493099    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:14.493113    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:14.509138    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:14.509148    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:14.534669    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:14.534684    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:14.549260    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:14.549271    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:14.564472    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:14.564482    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:14.576065    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:14.576076    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:14.587832    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:14.587843    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:14.599835    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:14.599849    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:14.604164    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:14.604175    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:14.641948    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:14.641960    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:14.657475    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:14.657490    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:14.670674    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:14.670688    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:14.685084    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:14.685095    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:14.700612    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:14.700624    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:17.220565    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:22.222964    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:22.223160    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:22.234514    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:22.234587    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:22.245003    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:22.245085    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:22.255721    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:22.255788    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:22.266108    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:22.266189    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:22.276397    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:22.276466    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:22.286908    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:22.286976    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:22.297482    4574 logs.go:276] 0 containers: []
	W0307 19:40:22.297493    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:22.297552    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:22.308030    4574 logs.go:276] 0 containers: []
	W0307 19:40:22.308043    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:22.308051    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:22.308056    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:22.319701    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:22.319713    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:22.335529    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:22.335541    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:22.360244    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:22.360255    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:22.406998    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:22.407010    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:22.419532    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:22.419542    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:22.439738    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:22.439748    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:22.452394    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:22.452408    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:22.466448    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:22.466459    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:22.503104    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:22.503114    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:22.507319    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:22.507326    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:22.521590    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:22.521602    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:22.533108    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:22.533118    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:22.544809    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:22.544820    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:22.558727    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:22.558738    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:25.075547    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:30.077599    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:30.077779    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:30.098144    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:30.098237    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:30.113156    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:30.113232    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:30.125444    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:30.125508    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:30.136438    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:30.136498    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:30.147975    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:30.148045    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:30.161088    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:30.161154    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:30.171529    4574 logs.go:276] 0 containers: []
	W0307 19:40:30.171543    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:30.171599    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:30.181424    4574 logs.go:276] 0 containers: []
	W0307 19:40:30.181437    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:30.181444    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:30.181449    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:30.195853    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:30.195864    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:30.206829    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:30.206843    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:30.223685    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:30.223696    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:30.260815    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:30.260825    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:30.278521    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:30.278535    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:30.292725    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:30.292736    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:30.307189    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:30.307200    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:30.311344    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:30.311350    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:30.325375    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:30.325386    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:30.351149    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:30.351160    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:30.364527    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:30.364537    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:30.376269    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:30.376281    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:30.388538    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:30.388548    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:30.424195    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:30.424206    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:32.939188    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:37.940672    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:37.940771    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:37.951957    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:37.952033    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:37.967828    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:37.967896    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:37.980533    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:37.980610    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:37.992172    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:37.992253    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:38.002700    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:38.002770    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:38.013699    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:38.013769    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:38.024825    4574 logs.go:276] 0 containers: []
	W0307 19:40:38.024844    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:38.024904    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:38.035489    4574 logs.go:276] 0 containers: []
	W0307 19:40:38.035500    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:38.035508    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:38.035514    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:38.050212    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:38.050222    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:38.075562    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:38.075569    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:38.110947    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:38.110961    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:38.124530    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:38.124541    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:38.138790    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:38.138800    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:38.155040    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:38.155051    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:38.166862    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:38.166872    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:38.177965    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:38.177977    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:38.199880    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:38.199891    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:38.210973    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:38.210984    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:38.223201    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:38.223215    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:38.259030    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:38.259044    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:38.263292    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:38.263301    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:38.277161    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:38.277170    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:40.792584    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:45.795218    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:45.795625    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:45.847432    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:45.847564    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:45.866271    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:45.866362    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:45.880417    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:45.880491    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:45.891828    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:45.891895    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:45.904080    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:45.904142    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:45.914446    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:45.914517    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:45.929431    4574 logs.go:276] 0 containers: []
	W0307 19:40:45.929450    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:45.929524    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:45.942921    4574 logs.go:276] 0 containers: []
	W0307 19:40:45.942937    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:45.942946    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:45.942951    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:45.957615    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:45.957628    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:45.969113    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:45.969125    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:45.980510    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:45.980521    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:45.991574    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:45.991583    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:46.005721    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:46.005731    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:46.023041    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:46.023053    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:46.047028    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:46.047037    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:46.084776    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:46.084787    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:46.089503    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:46.089511    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:46.103371    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:46.103381    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:46.138554    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:46.138562    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:46.167237    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:46.167249    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:46.185022    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:46.185034    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:46.197049    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:46.197060    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:48.713357    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:40:53.715800    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:40:53.716059    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:40:53.733575    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:40:53.733659    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:40:53.746857    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:40:53.746935    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:40:53.758021    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:40:53.758087    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:40:53.768276    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:40:53.768344    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:40:53.779248    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:40:53.779320    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:40:53.789925    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:40:53.789991    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:40:53.803619    4574 logs.go:276] 0 containers: []
	W0307 19:40:53.803631    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:40:53.803689    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:40:53.813399    4574 logs.go:276] 0 containers: []
	W0307 19:40:53.813409    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:40:53.813415    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:40:53.813420    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:40:53.817654    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:40:53.817660    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:40:53.856488    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:40:53.856501    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:40:53.869834    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:40:53.869845    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:40:53.889674    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:40:53.889685    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:40:53.901842    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:40:53.901854    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:40:53.917182    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:40:53.917194    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:40:53.928964    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:40:53.928975    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:40:53.946582    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:40:53.946592    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:40:53.970510    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:40:53.970517    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:40:53.984710    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:40:53.984722    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:40:53.996359    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:40:53.996368    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:40:54.032647    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:40:54.032654    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:40:54.047260    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:40:54.047272    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:40:54.063580    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:40:54.063593    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:40:56.576617    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:01.578796    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:01.579015    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:01.603315    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:01.603431    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:01.620359    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:01.620440    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:01.633728    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:01.633800    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:01.644540    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:01.644601    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:01.654659    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:01.654728    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:01.665344    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:01.665416    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:01.675488    4574 logs.go:276] 0 containers: []
	W0307 19:41:01.675501    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:01.675555    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:01.685655    4574 logs.go:276] 0 containers: []
	W0307 19:41:01.685667    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:01.685678    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:01.685684    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:01.725541    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:01.725553    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:01.741026    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:01.741036    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:01.755094    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:01.755104    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:01.767347    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:01.767358    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:01.772413    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:01.772422    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:01.786571    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:01.786581    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:01.797489    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:01.797501    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:01.822549    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:01.822558    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:01.835185    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:01.835197    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:01.852879    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:01.852889    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:01.888821    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:01.888831    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:01.901466    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:01.901476    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:01.924226    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:01.924236    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:01.938898    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:01.938912    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
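The cycle above repeats for the rest of this run: each "Checking apiserver healthz" line is followed roughly 5 s later by a "stopped: ... Client.Timeout exceeded" line, after which minikube falls back to gathering logs from every control-plane container before retrying. That timing is consistent with an HTTP client that has a short per-request deadline. A minimal, self-contained sketch of the polling pattern follows; it is an illustration, not minikube's actual api_server.go. Assumptions: the endpoint is copied from the log, the 5 s timeout is inferred from the timestamps, and InsecureSkipVerify stands in for trusting the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint copied from the log; the 5 s timeout is inferred from the
        // ~5 s gap between each "Checking" and "stopped" pair of lines.
        const healthz = "https://10.0.2.15:8443/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: skip certificate verification so the sketch is
                // self-contained; a real client would trust the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for attempt := 1; attempt <= 40; attempt++ {
            resp, err := client.Get(healthz)
            if err != nil {
                // Mirrors the "stopped: ... Client.Timeout exceeded" lines.
                fmt.Printf("stopped: %v\n", err)
                time.Sleep(2500 * time.Millisecond) // back off, then retry
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz ok")
                return
            }
        }
        fmt.Println("apiserver never became healthy")
    }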
	I0307 19:41:04.453042    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:09.455691    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:09.455933    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:09.478796    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:09.478910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:09.496394    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:09.496486    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:09.509347    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:09.509418    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:09.520154    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:09.520219    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:09.530599    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:09.530678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:09.541540    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:09.541603    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:09.551779    4574 logs.go:276] 0 containers: []
	W0307 19:41:09.551789    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:09.551843    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:09.562046    4574 logs.go:276] 0 containers: []
	W0307 19:41:09.562056    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:09.562063    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:09.562069    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:09.573725    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:09.573735    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:09.590958    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:09.590969    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:09.628524    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:09.628532    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:09.640992    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:09.641002    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:09.654485    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:09.654496    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:09.668620    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:09.668635    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:09.680034    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:09.680044    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:09.684585    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:09.684594    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:09.699019    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:09.699029    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:09.710604    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:09.710616    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:09.746865    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:09.746879    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:09.761392    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:09.761403    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:09.778344    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:09.778356    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:09.803994    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:09.804000    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
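Each fallback pass pairs a container lookup with a log tail, exactly as the Run: lines show: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to find container IDs, then docker logs --tail 400 <id> for each one, with a W-level message when a component (here kindnet and storage-provisioner) has no container. A sketch of that gather step under stated assumptions: it shells out to a local docker rather than going through minikube's ssh_runner, and the component list is abbreviated.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Abbreviated component list; the log also checks kube-scheduler,
        // kube-proxy, kube-controller-manager, kindnet and storage-provisioner.
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Matches the "No container was found matching" warnings.
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
            }
        }
    }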
	I0307 19:41:12.319591    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:17.321907    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:17.322096    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:17.336262    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:17.336338    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:17.352591    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:17.352660    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:17.363583    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:17.363657    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:17.374193    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:17.374261    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:17.384757    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:17.384824    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:17.395295    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:17.395360    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:17.406169    4574 logs.go:276] 0 containers: []
	W0307 19:41:17.406180    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:17.406236    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:17.417319    4574 logs.go:276] 0 containers: []
	W0307 19:41:17.417331    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:17.417339    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:17.417345    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:17.428534    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:17.428546    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:17.454794    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:17.454806    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:17.460045    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:17.460052    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:17.474153    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:17.474164    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:17.489083    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:17.489095    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:17.501384    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:17.501396    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:17.539217    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:17.539225    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:17.573878    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:17.573893    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:17.588284    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:17.588293    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:17.602942    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:17.602951    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:17.621915    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:17.621927    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:17.633906    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:17.633917    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:17.646201    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:17.646214    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:17.669116    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:17.669127    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:20.195156    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:25.197230    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:25.197432    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:25.213908    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:25.213983    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:25.224411    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:25.224485    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:25.235054    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:25.235126    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:25.246591    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:25.246664    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:25.257601    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:25.257671    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:25.268200    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:25.268265    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:25.278478    4574 logs.go:276] 0 containers: []
	W0307 19:41:25.278488    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:25.278544    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:25.289031    4574 logs.go:276] 0 containers: []
	W0307 19:41:25.289289    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:25.289313    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:25.289412    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:25.330216    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:25.330231    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:25.334523    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:25.334530    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:25.352545    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:25.352555    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:25.377338    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:25.377347    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:25.391889    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:25.391903    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:25.408877    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:25.408892    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:25.428497    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:25.428516    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:25.440153    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:25.440165    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:25.475316    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:25.475328    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:25.489933    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:25.489943    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:25.503669    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:25.503681    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:25.523171    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:25.523182    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:25.535289    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:25.535301    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:25.547887    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:25.547898    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:28.062016    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:33.064365    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:33.064522    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:33.080405    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:33.080492    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:33.093689    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:33.093759    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:33.104497    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:33.104563    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:33.114890    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:33.114963    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:33.125519    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:33.125585    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:33.136118    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:33.136184    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:33.146132    4574 logs.go:276] 0 containers: []
	W0307 19:41:33.146145    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:33.146210    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:33.156339    4574 logs.go:276] 0 containers: []
	W0307 19:41:33.156350    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:33.156358    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:33.156364    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:33.160645    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:33.160651    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:33.172593    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:33.172604    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:33.183354    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:33.183366    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:33.200305    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:33.200315    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:33.235892    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:33.235900    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:33.252790    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:33.252803    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:33.266564    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:33.266575    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:33.280931    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:33.280945    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:33.292506    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:33.292520    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:33.303819    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:33.303829    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:33.339125    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:33.339138    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:33.353142    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:33.353153    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:33.372632    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:33.372644    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:33.396379    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:33.396387    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:35.908632    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:40.910594    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:40.910695    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:40.932472    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:40.932547    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:40.944265    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:40.944336    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:40.954898    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:40.954967    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:40.966425    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:40.966490    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:40.977614    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:40.977678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:40.988720    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:40.988789    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:40.999338    4574 logs.go:276] 0 containers: []
	W0307 19:41:40.999351    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:40.999408    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:41.010371    4574 logs.go:276] 0 containers: []
	W0307 19:41:41.010384    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:41.010391    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:41.010396    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:41.022751    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:41.022763    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:41.035168    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:41.035181    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:41.049503    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:41.049514    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:41.064620    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:41.064633    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:41.080168    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:41.080183    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:41.100505    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:41.100519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:41.113298    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:41.113308    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:41.149897    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:41.149908    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:41.164546    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:41.164556    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:41.177287    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:41.177297    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:41.203789    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:41.203801    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:41.220903    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:41.220916    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:41.227116    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:41.227128    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:41.266205    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:41.266216    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:43.782725    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:48.784481    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:48.784665    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:48.808486    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:48.808578    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:48.822194    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:48.822273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:48.838017    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:48.838112    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:48.848251    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:48.848324    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:48.858772    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:48.858833    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:48.868818    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:48.868886    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:48.879284    4574 logs.go:276] 0 containers: []
	W0307 19:41:48.879299    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:48.879357    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:48.889796    4574 logs.go:276] 0 containers: []
	W0307 19:41:48.889807    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:48.889815    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:48.889820    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:48.903600    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:48.903611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:48.920728    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:48.920739    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:48.933767    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:48.933781    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:48.948262    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:48.948273    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:48.972165    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:48.972172    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:49.006301    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:49.006312    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:49.020328    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:49.020338    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:49.033199    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:49.033213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:49.047925    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:49.047937    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:49.060615    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:49.060626    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:49.071940    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:49.071950    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:49.109731    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:49.109742    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:49.114680    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:49.114685    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:49.126068    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:49.126080    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:51.642285    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:56.644382    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:56.644495    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:56.661717    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:56.661795    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:56.683549    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:56.683614    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:56.700590    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:56.700657    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:56.712138    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:56.712208    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:56.722228    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:56.722294    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:56.732707    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:56.732779    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:56.742655    4574 logs.go:276] 0 containers: []
	W0307 19:41:56.742666    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:56.742726    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:56.752743    4574 logs.go:276] 0 containers: []
	W0307 19:41:56.752753    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:56.752761    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:56.752767    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:56.767017    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:56.767033    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:56.778616    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:56.778630    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:56.790442    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:56.790455    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:56.827951    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:56.827959    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:56.841916    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:56.841928    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:56.854959    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:56.854969    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:56.876703    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:56.876712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:56.894637    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:56.894648    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:56.928751    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:56.928762    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:56.941165    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:56.941176    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:56.952904    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:56.952917    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:56.967416    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:56.967426    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:56.982541    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:56.982551    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:57.005592    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:57.005601    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:59.512206    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:04.514216    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:04.514327    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:04.528464    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:04.528562    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:04.540210    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:04.540273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:04.550272    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:04.550341    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:04.566085    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:04.566160    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:04.577296    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:04.577364    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:04.588084    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:04.588155    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:04.597928    4574 logs.go:276] 0 containers: []
	W0307 19:42:04.597940    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:04.597995    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:04.608718    4574 logs.go:276] 0 containers: []
	W0307 19:42:04.608731    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:04.608739    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:04.608746    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:04.624251    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:04.624267    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:04.636361    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:04.636377    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:04.641027    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:04.641034    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:04.655514    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:04.655523    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:04.675051    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:04.675063    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:04.689287    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:04.689303    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:04.707285    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:04.707296    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:04.718562    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:04.718572    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:04.755226    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:04.755235    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:04.790481    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:04.790494    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:04.813511    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:04.813519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:04.831430    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:04.831441    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:04.842676    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:04.842687    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:04.855612    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:04.855625    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:07.369277    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:12.371433    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:12.371571    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:12.387623    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:12.387710    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:12.399786    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:12.399861    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:12.410839    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:12.410913    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:12.421207    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:12.421275    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:12.431848    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:12.431923    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:12.444990    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:12.445053    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:12.454848    4574 logs.go:276] 0 containers: []
	W0307 19:42:12.454857    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:12.454911    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:12.465069    4574 logs.go:276] 0 containers: []
	W0307 19:42:12.465082    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:12.465090    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:12.465096    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:12.501846    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:12.501858    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:12.515801    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:12.515812    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:12.526946    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:12.526957    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:12.531501    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:12.531508    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:12.542909    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:12.542920    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:12.562582    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:12.562596    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:12.574780    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:12.574792    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:12.590245    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:12.590255    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:12.624096    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:12.624112    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:12.638181    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:12.638191    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:12.649398    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:12.649409    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:12.663343    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:12.663354    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:12.677596    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:12.677605    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:12.703063    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:12.703080    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:15.217633    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:20.219180    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:20.219379    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:20.237098    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:20.237186    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:20.250058    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:20.250133    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:20.261033    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:20.261103    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:20.271434    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:20.271514    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:20.281717    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:20.281783    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:20.297170    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:20.297236    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:20.307314    4574 logs.go:276] 0 containers: []
	W0307 19:42:20.307325    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:20.307382    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:20.317793    4574 logs.go:276] 0 containers: []
	W0307 19:42:20.317805    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:20.317814    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:20.317820    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:20.341178    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:20.341188    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:20.356475    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:20.356486    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:20.370793    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:20.370803    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:20.382467    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:20.382479    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:20.416424    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:20.416435    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:20.421071    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:20.421079    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:20.435227    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:20.435237    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:20.446797    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:20.446808    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:20.460874    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:20.460888    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:20.476303    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:20.476314    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:20.512165    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:20.512175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:20.526216    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:20.526228    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:20.543897    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:20.543907    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:20.555880    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:20.555893    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:23.071640    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:28.074140    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:28.074384    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:28.094772    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:28.094874    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:28.109367    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:28.109440    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:28.124742    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:28.124826    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:28.137143    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:28.137224    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:28.149633    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:28.149699    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:28.160585    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:28.160649    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:28.170283    4574 logs.go:276] 0 containers: []
	W0307 19:42:28.170297    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:28.170346    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:28.180799    4574 logs.go:276] 0 containers: []
	W0307 19:42:28.180811    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:28.180819    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:28.180825    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:28.218720    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:28.218728    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:28.230235    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:28.230252    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:28.242113    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:28.242126    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:28.246721    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:28.246726    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:28.280701    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:28.280712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:28.295376    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:28.295386    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:28.309676    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:28.309688    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:28.322186    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:28.322200    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:28.337081    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:28.337095    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:28.351371    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:28.351383    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:28.363467    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:28.363482    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:28.380627    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:28.380638    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:28.404435    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:28.404445    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:28.417376    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:28.417387    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:30.938116    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:35.940264    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:35.940411    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:35.954157    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:35.954232    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:35.966018    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:35.966078    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:35.977842    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:35.977910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:35.988709    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:35.988781    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:35.999265    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:35.999332    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:36.009750    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:36.009821    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:36.020528    4574 logs.go:276] 0 containers: []
	W0307 19:42:36.020538    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:36.020592    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:36.030565    4574 logs.go:276] 0 containers: []
	W0307 19:42:36.030575    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:36.030583    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:36.030591    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:36.043640    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:36.043649    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:36.057415    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:36.057424    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:36.072764    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:36.072773    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:36.086854    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:36.086863    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:36.100874    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:36.100882    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:36.115705    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:36.115717    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:36.127165    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:36.127176    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:36.144690    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:36.144701    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:36.167341    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:36.167348    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:36.202255    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:36.202262    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:36.206490    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:36.206496    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:36.218603    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:36.218616    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:36.233176    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:36.233187    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:36.267600    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:36.267611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:38.780495    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:43.782306    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:43.782377    4574 kubeadm.go:591] duration metric: took 4m3.170327917s to restartPrimaryControlPlane
	W0307 19:42:43.782421    4574 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 19:42:43.782439    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 19:42:44.659504    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:42:44.664413    4574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:42:44.667118    4574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:42:44.669699    4574 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:42:44.669705    4574 kubeadm.go:156] found existing configuration files:
	
	I0307 19:42:44.669725    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf
	I0307 19:42:44.672342    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:42:44.672364    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:42:44.675000    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf
	I0307 19:42:44.677405    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:42:44.677424    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:42:44.680478    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf
	I0307 19:42:44.683355    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:42:44.683377    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:42:44.685821    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf
	I0307 19:42:44.688778    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:42:44.688797    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 19:42:44.691892    4574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 19:42:44.708882    4574 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 19:42:44.708978    4574 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 19:42:44.753311    4574 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:42:44.753368    4574 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:42:44.753413    4574 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 19:42:44.802435    4574 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:42:44.805694    4574 out.go:204]   - Generating certificates and keys ...
	I0307 19:42:44.805733    4574 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 19:42:44.805766    4574 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 19:42:44.805812    4574 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:42:44.805851    4574 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:42:44.805890    4574 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:42:44.805919    4574 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 19:42:44.805954    4574 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:42:44.805996    4574 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:42:44.806032    4574 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:42:44.806083    4574 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:42:44.806104    4574 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 19:42:44.806133    4574 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:42:44.893983    4574 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:42:45.048357    4574 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:42:45.165723    4574 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:42:45.257355    4574 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:42:45.287974    4574 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:42:45.288366    4574 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:42:45.288441    4574 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 19:42:45.378136    4574 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:42:45.382245    4574 out.go:204]   - Booting up control plane ...
	I0307 19:42:45.382294    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:42:45.382332    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:42:45.382362    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:42:45.382424    4574 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:42:45.382504    4574 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:42:49.885089    4574 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.506196 seconds
	I0307 19:42:49.885216    4574 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 19:42:49.891887    4574 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 19:42:50.404481    4574 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 19:42:50.404731    4574 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-440000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 19:42:50.915045    4574 kubeadm.go:309] [bootstrap-token] Using token: 4y3wpr.20ebbqchoj7k77el
	I0307 19:42:50.919087    4574 out.go:204]   - Configuring RBAC rules ...
	I0307 19:42:50.919207    4574 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 19:42:50.919281    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 19:42:50.925103    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 19:42:50.926392    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 19:42:50.927556    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 19:42:50.928773    4574 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 19:42:50.932708    4574 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 19:42:51.117061    4574 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 19:42:51.321040    4574 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 19:42:51.321540    4574 kubeadm.go:309] 
	I0307 19:42:51.321644    4574 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 19:42:51.321657    4574 kubeadm.go:309] 
	I0307 19:42:51.321792    4574 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 19:42:51.321805    4574 kubeadm.go:309] 
	I0307 19:42:51.321849    4574 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 19:42:51.321933    4574 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 19:42:51.322024    4574 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 19:42:51.322035    4574 kubeadm.go:309] 
	I0307 19:42:51.322128    4574 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 19:42:51.322195    4574 kubeadm.go:309] 
	I0307 19:42:51.322231    4574 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 19:42:51.322234    4574 kubeadm.go:309] 
	I0307 19:42:51.322258    4574 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 19:42:51.322293    4574 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 19:42:51.322361    4574 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 19:42:51.322368    4574 kubeadm.go:309] 
	I0307 19:42:51.322421    4574 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 19:42:51.322467    4574 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 19:42:51.322470    4574 kubeadm.go:309] 
	I0307 19:42:51.322513    4574 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4y3wpr.20ebbqchoj7k77el \
	I0307 19:42:51.322566    4574 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 \
	I0307 19:42:51.322578    4574 kubeadm.go:309] 	--control-plane 
	I0307 19:42:51.322586    4574 kubeadm.go:309] 
	I0307 19:42:51.322632    4574 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 19:42:51.322638    4574 kubeadm.go:309] 
	I0307 19:42:51.322677    4574 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4y3wpr.20ebbqchoj7k77el \
	I0307 19:42:51.322731    4574 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 
	I0307 19:42:51.322804    4574 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 19:42:51.322817    4574 cni.go:84] Creating CNI manager for ""
	I0307 19:42:51.322825    4574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:42:51.330476    4574 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 19:42:51.334578    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 19:42:51.338036    4574 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0307 19:42:51.345290    4574 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:42:51.345345    4574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-440000 minikube.k8s.io/updated_at=2024_03_07T19_42_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=running-upgrade-440000 minikube.k8s.io/primary=true
	I0307 19:42:51.345380    4574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:42:51.388305    4574 ops.go:34] apiserver oom_adj: -16
	I0307 19:42:51.388321    4574 kubeadm.go:1106] duration metric: took 43.0485ms to wait for elevateKubeSystemPrivileges
	W0307 19:42:51.388373    4574 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 19:42:51.388379    4574 kubeadm.go:393] duration metric: took 4m10.789570791s to StartCluster
	I0307 19:42:51.388388    4574 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:42:51.388472    4574 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:42:51.388863    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:42:51.389068    4574 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:42:51.392506    4574 out.go:177] * Verifying Kubernetes components...
	I0307 19:42:51.389164    4574 config.go:182] Loaded profile config "running-upgrade-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:42:51.389133    4574 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:42:51.399471    4574 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-440000"
	I0307 19:42:51.399484    4574 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-440000"
	W0307 19:42:51.399500    4574 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:42:51.399510    4574 host.go:66] Checking if "running-upgrade-440000" exists ...
	I0307 19:42:51.399525    4574 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-440000"
	I0307 19:42:51.399540    4574 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-440000"
	I0307 19:42:51.399568    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:42:51.400985    4574 kapi.go:59] client config for running-upgrade-440000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c576a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:42:51.401114    4574 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-440000"
	W0307 19:42:51.401119    4574 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:42:51.401126    4574 host.go:66] Checking if "running-upgrade-440000" exists ...
	I0307 19:42:51.406508    4574 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:42:51.410467    4574 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:42:51.410473    4574 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:42:51.410481    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:42:51.411350    4574 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:42:51.411355    4574 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:42:51.411359    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:42:51.499011    4574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:42:51.504442    4574 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:42:51.504495    4574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:42:51.509588    4574 api_server.go:72] duration metric: took 120.504375ms to wait for apiserver process to appear ...
	I0307 19:42:51.509620    4574 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:42:51.509635    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:51.527203    4574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:42:51.530016    4574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:42:56.511575    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:56.511611    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:01.511690    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:01.511711    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:06.511829    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:06.511872    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:11.512111    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:11.512151    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:16.512537    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:16.512592    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:21.513108    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:21.513164    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 19:43:21.897253    4574 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 19:43:21.901229    4574 out.go:177] * Enabled addons: storage-provisioner
	I0307 19:43:21.912108    4574 addons.go:505] duration metric: took 30.524271167s for enable addons: enabled=[storage-provisioner]
	I0307 19:43:26.513900    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:26.513936    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:31.514921    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:31.514968    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:36.516310    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:36.516361    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:41.517700    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:41.517800    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:46.519777    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:46.519800    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:51.521809    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:51.522022    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:51.538703    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:43:51.538790    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:51.563781    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:43:51.563859    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:51.581486    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:43:51.581570    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:51.599794    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:43:51.599867    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:51.610534    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:43:51.610604    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:51.630379    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:43:51.630449    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:51.640768    4574 logs.go:276] 0 containers: []
	W0307 19:43:51.640783    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:51.640843    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:51.650676    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:43:51.650693    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:43:51.650699    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:43:51.661839    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:43:51.661853    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:43:51.675251    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:43:51.675264    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:43:51.692494    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:51.692505    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:51.730614    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:43:51.730627    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:43:51.745433    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:43:51.745446    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:43:51.759510    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:43:51.759521    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:43:51.774209    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:43:51.774220    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:43:51.786060    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:43:51.786071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:43:51.800708    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:51.800720    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:51.835435    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:51.835449    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:51.840396    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:51.840403    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:51.865004    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:43:51.865012    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:54.378266    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:59.380480    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:59.380744    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:59.408685    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:43:59.408814    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:59.426836    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:43:59.426922    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:59.440507    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:43:59.440580    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:59.451787    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:43:59.451856    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:59.462525    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:43:59.462598    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:59.473744    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:43:59.473807    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:59.484025    4574 logs.go:276] 0 containers: []
	W0307 19:43:59.484040    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:59.484100    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:59.494874    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:43:59.494888    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:43:59.494894    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:43:59.507556    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:43:59.507566    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:59.519398    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:43:59.519411    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:43:59.533954    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:43:59.533963    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:43:59.548718    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:43:59.548729    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:43:59.563749    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:43:59.563760    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:43:59.576466    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:43:59.576478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:43:59.587989    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:43:59.587999    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:43:59.617464    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:43:59.617475    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:43:59.628948    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:59.628959    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:59.652257    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:59.652267    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:59.687435    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:59.687449    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:59.691823    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:59.691830    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:02.229474    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:07.230708    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:07.230883    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:07.249598    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:07.249699    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:07.263867    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:07.263945    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:07.275802    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:07.275871    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:07.286795    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:07.286875    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:07.297170    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:07.297237    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:07.308072    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:07.308129    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:07.318427    4574 logs.go:276] 0 containers: []
	W0307 19:44:07.318440    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:07.318496    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:07.332573    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:07.332588    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:07.332593    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:07.367779    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:07.367791    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:07.372680    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:07.372688    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:07.384454    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:07.384465    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:07.402061    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:07.402071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:07.416556    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:07.416566    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:07.428139    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:07.428148    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:07.464287    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:07.464299    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:07.479129    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:07.479140    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:07.494484    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:07.494495    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:07.507855    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:07.507866    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:07.520101    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:07.520111    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:07.535203    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:07.535218    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:10.060106    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:15.062157    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:15.062361    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:15.082197    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:15.082295    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:15.096801    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:15.096875    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:15.108573    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:15.108642    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:15.119396    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:15.119467    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:15.130606    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:15.130679    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:15.141170    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:15.141242    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:15.150967    4574 logs.go:276] 0 containers: []
	W0307 19:44:15.150982    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:15.151036    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:15.161551    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:15.161566    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:15.161571    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:15.196404    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:15.196414    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:15.203345    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:15.203357    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:15.240876    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:15.240887    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:15.252226    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:15.252236    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:15.270164    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:15.270175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:15.287350    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:15.287363    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:15.300257    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:15.300267    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:15.316197    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:15.316212    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:15.332706    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:15.332716    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:15.344397    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:15.344406    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:15.358521    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:15.358535    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:15.376533    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:15.376548    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:17.902180    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:22.904338    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:22.904624    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:22.931731    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:22.931854    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:22.949388    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:22.949482    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:22.962798    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:22.962877    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:22.974514    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:22.974581    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:22.986695    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:22.986762    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:22.997472    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:22.997538    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:23.007391    4574 logs.go:276] 0 containers: []
	W0307 19:44:23.007403    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:23.007458    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:23.017982    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:23.017997    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:23.018002    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:23.041204    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:23.041210    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:23.075658    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:23.075665    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:23.089762    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:23.089773    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:23.100745    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:23.100756    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:23.115194    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:23.115205    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:23.126647    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:23.126657    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:23.138031    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:23.138041    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:23.142827    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:23.142835    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:23.186575    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:23.186587    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:23.204385    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:23.204399    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:23.216237    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:23.216247    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:23.233914    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:23.233925    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:25.751920    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:30.754450    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:30.754748    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:30.780997    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:30.781106    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:30.803506    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:30.803604    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:30.818037    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:30.818113    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:30.828587    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:30.828655    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:30.839033    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:30.839103    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:30.850117    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:30.850183    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:30.860676    4574 logs.go:276] 0 containers: []
	W0307 19:44:30.860689    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:30.860749    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:30.870987    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:30.871004    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:30.871009    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:30.882829    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:30.882840    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:30.895069    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:30.895080    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:30.910376    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:30.910394    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:30.928086    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:30.928101    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:30.939494    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:30.939504    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:30.954219    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:30.954234    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:30.959279    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:30.959287    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:31.004214    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:31.004230    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:31.018647    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:31.018661    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:31.031728    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:31.031742    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:31.050008    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:31.050018    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:31.074572    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:31.074579    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
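
	The repeating pattern above is minikube's wait loop: each pass issues a GET against https://10.0.2.15:8443/healthz with a roughly five-second client timeout, and when that fails it falls back to enumerating the control-plane containers and gathering their logs before trying again. The sketch below is a minimal, illustrative reconstruction of the polling side only, not minikube's actual implementation; the helper name, the retry interval, and the overall deadline are assumptions, while the URL and the per-request timeout come straight from the log.

	// Illustrative sketch of the healthz polling seen above (hypothetical
	// helper, not minikube's code). The per-request timeout is what produces
	// the "Client.Timeout exceeded while awaiting headers" errors in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, perRequestTimeout, interval time.Duration, deadline time.Time) error {
		client := &http.Client{
			Timeout: perRequestTimeout, // 5s per attempt, matching the log
			Transport: &http.Transport{
				// The in-VM apiserver serves a self-signed certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered healthz
				}
			}
			time.Sleep(interval) // pause before the next attempt (approximate)
		}
		return fmt.Errorf("apiserver at %s never reported healthy", url)
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz",
			5*time.Second, 2500*time.Millisecond, time.Now().Add(4*time.Minute))
		fmt.Println("result:", err)
	}

	In the log, every attempt ends with the client timeout rather than a response, so the loop never returns early and the log-gathering cycle repeats until the outer deadline expires.
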
	I0307 19:44:33.609258    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:38.611412    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:38.611546    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:38.628947    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:38.629032    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:38.642107    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:38.642180    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:38.658249    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:38.658315    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:38.668979    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:38.669043    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:38.678735    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:38.678800    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:38.688847    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:38.688917    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:38.701755    4574 logs.go:276] 0 containers: []
	W0307 19:44:38.701766    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:38.701821    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:38.712967    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:38.712981    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:38.712987    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:38.724737    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:38.724748    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:38.758777    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:38.758788    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:38.763566    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:38.763580    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:38.778049    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:38.778059    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:38.792267    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:38.792280    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:38.806048    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:38.806060    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:38.817457    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:38.817470    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:38.852149    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:38.852162    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:38.863488    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:38.863498    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:38.877969    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:38.877978    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:38.904208    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:38.904222    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:38.923642    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:38.923653    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:41.449662    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:46.451727    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:46.451883    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:46.467290    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:46.467368    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:46.478323    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:46.478396    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:46.489410    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:46.489481    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:46.500049    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:46.500117    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:46.510646    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:46.510715    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:46.521074    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:46.521142    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:46.531615    4574 logs.go:276] 0 containers: []
	W0307 19:44:46.531625    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:46.531678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:46.546489    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:46.546504    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:46.546510    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:46.582713    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:46.582728    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:46.587640    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:46.587648    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:46.600830    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:46.600845    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:46.615402    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:46.615411    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:46.633376    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:46.633389    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:46.657685    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:46.657697    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:46.669440    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:46.669452    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:46.709656    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:46.709668    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:46.724835    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:46.724846    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:46.738494    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:46.738506    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:46.750146    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:46.750156    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:46.762509    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:46.762524    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:49.276188    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:54.277085    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:54.277256    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:54.295870    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:54.295958    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:54.309970    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:54.310037    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:54.321866    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:54.321939    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:54.333115    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:54.333187    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:54.343697    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:54.343769    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:54.354159    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:54.354237    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:54.364250    4574 logs.go:276] 0 containers: []
	W0307 19:44:54.364260    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:54.364318    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:54.374346    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:54.374365    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:54.374370    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:54.392701    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:54.392712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:54.404186    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:54.404197    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:54.408613    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:54.408621    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:54.425290    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:54.425301    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:54.443577    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:54.443590    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:54.455995    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:54.456008    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:54.470526    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:54.470537    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:54.485466    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:54.485476    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:54.510030    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:54.510044    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:54.545486    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:54.545498    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:54.582076    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:54.582091    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:54.605326    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:54.605339    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:57.119307    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:02.121466    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:02.121681    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:02.136089    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:02.136168    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:02.147577    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:02.147645    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:02.162713    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:02.162780    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:02.173133    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:02.173207    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:02.183126    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:02.183194    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:02.193898    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:02.193962    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:02.204049    4574 logs.go:276] 0 containers: []
	W0307 19:45:02.204061    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:02.204117    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:02.214321    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:02.214335    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:02.214340    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:02.225514    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:02.225524    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:02.237464    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:02.237476    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:02.271364    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:02.271375    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:02.275533    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:02.275542    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:02.293982    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:02.293993    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:02.307639    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:02.307651    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:02.318935    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:02.318946    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:02.342846    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:02.342855    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:02.354080    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:02.354094    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:02.390453    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:02.390467    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:02.410120    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:02.410133    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:02.425378    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:02.425388    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:04.944788    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:09.946931    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:09.947179    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:09.969703    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:09.969810    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:09.985591    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:09.985678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:09.999151    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:09.999230    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:10.010609    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:10.010691    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:10.025617    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:10.025683    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:10.036673    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:10.036735    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:10.046721    4574 logs.go:276] 0 containers: []
	W0307 19:45:10.046735    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:10.046796    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:10.057587    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:10.057607    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:10.057613    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:10.074811    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:10.074825    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:10.079396    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:10.079404    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:10.112802    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:10.112817    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:10.127189    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:10.127198    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:10.138773    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:10.138783    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:10.150356    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:10.150367    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:10.165273    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:10.165284    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:10.177255    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:10.177268    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:10.211485    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:10.211493    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:10.234736    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:10.234746    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:10.249341    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:10.249354    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:10.263397    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:10.263410    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:10.275481    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:10.275492    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:10.291094    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:10.291106    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
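
	Before each gathering pass, the loop discovers which container backs each component by filtering docker ps -a on the k8s_<name> prefix and printing only the IDs; because -a also lists exited containers, the coredns count above can grow from two to four IDs across passes. The recurring container-status command works the same way in shell terms: the backtick substitution `which crictl || echo crictl` resolves crictl by full path when available (otherwise the bare name), and if that invocation fails the whole command falls back to sudo docker ps -a. Below is a minimal sketch of the discovery step, using the exact filter and format strings from the log; the helper name and the direct local docker invocation are assumptions, since minikube actually runs these commands over SSH inside the VM.

	// Illustrative sketch of the per-component container discovery seen above
	// (hypothetical helper, not minikube's ssh_runner).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		// Same flags as the log: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per line
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		}
	}
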
	I0307 19:45:12.807450    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:17.809582    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:17.809792    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:17.838187    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:17.838298    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:17.856040    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:17.856136    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:17.870065    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:17.871587    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:17.883299    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:17.883374    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:17.893995    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:17.894062    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:17.904399    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:17.904468    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:17.914842    4574 logs.go:276] 0 containers: []
	W0307 19:45:17.914853    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:17.914909    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:17.925316    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:17.925333    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:17.925339    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:17.939671    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:17.939685    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:17.951480    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:17.951492    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:17.955971    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:17.955977    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:17.998367    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:17.998380    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:18.011531    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:18.011545    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:18.026376    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:18.026385    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:18.038076    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:18.038085    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:18.060124    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:18.060135    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:18.072204    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:18.072217    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:18.083648    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:18.083661    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:18.118998    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:18.119007    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:18.133156    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:18.133167    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:18.144467    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:18.144478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:18.160953    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:18.160964    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:20.689168    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:25.690195    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:25.690471    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:25.721688    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:25.721806    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:25.738574    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:25.738655    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:25.751676    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:25.751751    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:25.763474    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:25.763542    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:25.781017    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:25.781084    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:25.791503    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:25.791569    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:25.801202    4574 logs.go:276] 0 containers: []
	W0307 19:45:25.801213    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:25.801273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:25.811462    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:25.811478    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:25.811483    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:25.847371    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:25.847385    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:25.861927    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:25.861938    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:25.873605    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:25.873614    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:25.887927    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:25.887938    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:25.899236    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:25.899247    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:25.911025    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:25.911037    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:25.922809    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:25.922823    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:25.934945    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:25.934956    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:25.957381    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:25.957390    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:25.982324    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:25.982332    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:26.017826    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:26.017834    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:26.022865    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:26.022874    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:26.034431    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:26.034440    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:26.049064    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:26.049075    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:28.561651    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:33.563783    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:33.563912    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:33.576442    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:33.576519    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:33.587344    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:33.587416    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:33.600493    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:33.600583    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:33.610989    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:33.611052    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:33.621575    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:33.621641    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:33.636042    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:33.636114    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:33.646265    4574 logs.go:276] 0 containers: []
	W0307 19:45:33.646278    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:33.646328    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:33.657341    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:33.657357    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:33.657363    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:33.692747    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:33.692760    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:33.710999    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:33.711011    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:33.726723    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:33.726737    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:33.731226    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:33.731232    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:33.746598    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:33.746609    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:33.759061    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:33.759071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:33.776863    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:33.776877    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:33.792361    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:33.792370    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:33.815790    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:33.815799    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:33.826980    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:33.826992    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:33.843562    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:33.843574    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:33.857487    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:33.857500    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:33.868862    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:33.868875    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:33.904844    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:33.904863    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:36.418904    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:41.421284    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:41.421394    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:41.431988    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:41.432063    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:41.442843    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:41.442912    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:41.454001    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:41.454072    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:41.465479    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:41.465550    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:41.479694    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:41.479766    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:41.490399    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:41.490467    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:41.501122    4574 logs.go:276] 0 containers: []
	W0307 19:45:41.501135    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:41.501195    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:41.511608    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:41.511624    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:41.511629    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:41.523414    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:41.523428    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:41.536494    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:41.536507    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:41.572016    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:41.572026    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:41.576649    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:41.576655    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:41.588072    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:41.588085    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:41.608437    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:41.608450    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:41.620001    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:41.620010    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:41.643894    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:41.643904    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:41.658531    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:41.658542    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:41.692199    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:41.692210    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:41.708715    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:41.708728    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:41.720261    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:41.720273    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:41.733202    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:41.733213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:41.751504    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:41.751519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:44.270871    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:49.273268    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:49.273379    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:49.288238    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:49.288306    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:49.299572    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:49.299646    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:49.311677    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:49.311754    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:49.322530    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:49.322605    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:49.333567    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:49.333642    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:49.344961    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:49.345031    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:49.356239    4574 logs.go:276] 0 containers: []
	W0307 19:45:49.356253    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:49.356314    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:49.369183    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:49.369203    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:49.369209    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:49.405963    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:49.405979    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:49.443897    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:49.443908    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:49.462212    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:49.462226    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:49.476394    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:49.476407    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:49.488798    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:49.488811    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:49.504533    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:49.504549    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:49.518400    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:49.518412    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:49.530564    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:49.530575    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:49.550162    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:49.550175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:49.569085    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:49.569102    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:49.582542    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:49.582554    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:49.595545    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:49.595557    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:49.600352    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:49.600361    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:49.612989    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:49.612999    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:52.140253    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:57.142420    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:57.142584    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:57.165946    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:57.166024    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:57.177326    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:57.177400    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:57.188522    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:57.188600    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:57.201350    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:57.201424    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:57.211647    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:57.211716    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:57.222586    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:57.222653    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:57.233315    4574 logs.go:276] 0 containers: []
	W0307 19:45:57.233325    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:57.233377    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:57.243761    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:57.243779    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:57.243784    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:57.258052    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:57.258065    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:57.275850    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:57.275865    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:57.288962    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:57.288973    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:57.303892    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:57.303903    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:57.338498    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:57.338510    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:57.342727    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:57.342733    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:57.357831    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:57.357845    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:57.369740    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:57.369752    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:57.381769    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:57.381780    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:57.405361    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:57.405369    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:57.419406    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:57.419417    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:57.431780    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:57.431790    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:57.443243    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:57.443252    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:57.478599    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:57.478610    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:59.992551    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:04.994654    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:04.994808    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:05.006350    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:05.006416    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:05.016484    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:05.016556    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:05.027656    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:05.027725    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:05.038243    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:05.038311    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:05.049003    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:05.049066    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:05.059520    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:05.059589    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:05.069840    4574 logs.go:276] 0 containers: []
	W0307 19:46:05.069851    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:05.069910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:05.085238    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:05.085256    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:05.085261    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:05.120912    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:05.120926    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:05.125768    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:05.125775    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:05.137836    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:05.137848    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:05.150186    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:05.150196    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:05.161720    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:05.161731    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:05.195877    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:05.195888    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:05.211081    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:05.211091    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:05.237015    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:05.237025    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:05.249028    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:05.249037    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:05.264904    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:05.264921    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:05.277176    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:05.277187    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:05.292124    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:05.292136    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:05.305687    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:05.305701    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:05.320399    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:05.320408    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:07.846928    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:12.849037    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:12.849294    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:12.878507    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:12.878630    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:12.896944    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:12.897031    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:12.911022    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:12.911093    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:12.922705    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:12.922777    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:12.933351    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:12.933419    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:12.943945    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:12.944015    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:12.954328    4574 logs.go:276] 0 containers: []
	W0307 19:46:12.954339    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:12.954397    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:12.965569    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:12.965585    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:12.965591    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:12.978172    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:12.978182    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:12.982468    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:12.982478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:12.996310    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:12.996322    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:13.008418    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:13.008429    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:13.023553    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:13.023564    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:13.035050    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:13.035060    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:13.060092    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:13.060104    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:13.095291    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:13.095302    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:13.109938    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:13.109949    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:13.122115    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:13.122124    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:13.157515    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:13.157544    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:13.169421    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:13.169434    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:13.186946    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:13.186960    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:13.200674    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:13.200691    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:15.714208    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:20.716670    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:20.716889    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:20.733268    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:20.733353    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:20.750609    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:20.750679    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:20.761613    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:20.761684    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:20.772630    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:20.772692    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:20.782869    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:20.782950    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:20.793232    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:20.793303    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:20.803009    4574 logs.go:276] 0 containers: []
	W0307 19:46:20.803019    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:20.803073    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:20.818218    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:20.818239    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:20.818245    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:20.829902    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:20.829916    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:20.842238    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:20.842252    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:20.856311    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:20.856324    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:20.893639    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:20.893653    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:20.908618    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:20.908629    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:20.926939    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:20.926949    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:20.938454    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:20.938468    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:20.961100    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:20.961108    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:20.965261    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:20.965267    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:21.000341    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:21.000351    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:21.012114    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:21.012124    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:21.026768    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:21.026780    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:21.038993    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:21.039003    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:21.050694    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:21.050707    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:23.566286    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:28.568320    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:28.568574    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:28.596961    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:28.597071    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:28.618680    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:28.618765    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:28.641482    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:28.641564    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:28.656877    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:28.656946    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:28.669879    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:28.669940    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:28.680232    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:28.680322    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:28.691093    4574 logs.go:276] 0 containers: []
	W0307 19:46:28.691107    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:28.691167    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:28.701281    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:28.701300    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:28.701305    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:28.718172    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:28.718184    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:28.753473    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:28.753485    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:28.758106    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:28.758116    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:28.795209    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:28.795220    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:28.809373    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:28.809387    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:28.823072    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:28.823083    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:28.835546    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:28.835559    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:28.853270    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:28.853281    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:28.865690    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:28.865700    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:28.879755    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:28.879768    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:28.907605    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:28.907616    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:28.919125    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:28.919136    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:28.931055    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:28.931064    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:28.945616    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:28.945626    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:31.461184    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:36.461371    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:36.461668    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:36.492556    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:36.492681    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:36.511796    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:36.511877    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:36.526031    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:36.526100    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:36.543629    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:36.543697    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:36.557904    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:36.557970    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:36.568454    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:36.568526    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:36.578665    4574 logs.go:276] 0 containers: []
	W0307 19:46:36.578682    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:36.578737    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:36.589635    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:36.589651    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:36.589657    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:36.601595    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:36.601605    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:36.614878    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:36.614890    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:36.620057    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:36.620066    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:36.654911    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:36.654921    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:36.667020    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:36.667030    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:36.679689    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:36.679708    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:36.694597    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:36.694611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:36.707203    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:36.707213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:36.718800    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:36.718814    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:36.730175    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:36.730185    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:36.752733    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:36.752742    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:36.786264    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:36.786272    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:36.801688    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:36.801698    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:36.815607    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:36.815620    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:39.335236    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:44.336263    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:44.336484    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:44.351578    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:44.351662    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:44.364262    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:44.364343    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:44.377354    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:44.377453    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:44.389893    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:44.389969    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:44.402217    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:44.402307    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:44.418663    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:44.418746    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:44.429291    4574 logs.go:276] 0 containers: []
	W0307 19:46:44.429303    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:44.429372    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:44.443159    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:44.443179    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:44.443185    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:44.454814    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:44.454826    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:44.468511    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:44.468523    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:44.473417    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:44.473425    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:44.485138    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:44.485151    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:44.497094    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:44.497106    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:44.509225    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:44.509237    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:44.544152    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:44.544166    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:44.558472    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:44.558484    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:44.570108    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:44.570118    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:44.587776    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:44.587787    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:44.610726    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:44.610737    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:44.625127    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:44.625138    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:44.637341    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:44.637353    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:44.652862    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:44.652875    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:47.190595    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:52.192789    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:52.198064    4574 out.go:177] 
	W0307 19:46:52.202260    4574 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 19:46:52.202270    4574 out.go:239] * 
	W0307 19:46:52.203114    4574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:46:52.219041    4574 out.go:177] 

** /stderr **
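The exit above is the end state of a poll loop: minikube probes the apiserver's /healthz every few seconds with a short per-request timeout, and gives up once the overall 6m0s node-start deadline lapses. A minimal Go sketch of that kind of wait loop, for illustration only — the endpoint, timeouts, and TLS handling here are assumptions, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the overall deadline passes.
	func waitForHealthz(url string, perRequest, overall time.Duration) error {
		client := &http.Client{
			// Short per-request timeout; when it fires while waiting for response
			// headers, Go reports "Client.Timeout exceeded while awaiting headers".
			Timeout: perRequest,
			Transport: &http.Transport{
				// Assumption: skip verification of the apiserver's self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // pause between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}

The short per-probe timeout is what makes the log show a failed probe roughly every five seconds instead of one long hang; the loop simply never sees a 200 before the outer deadline.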
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-440000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-07 19:46:52.309431 -0800 PST m=+3073.659781292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-440000 -n running-upgrade-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-440000 -n running-upgrade-440000: exit status 2 (15.629998667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-440000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-741000          | force-systemd-flag-741000 | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-390000              | force-systemd-env-390000  | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-390000           | force-systemd-env-390000  | jenkins | v1.32.0 | 07 Mar 24 19:36 PST | 07 Mar 24 19:36 PST |
	| start   | -p docker-flags-034000                | docker-flags-034000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-741000             | force-systemd-flag-741000 | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-741000          | force-systemd-flag-741000 | jenkins | v1.32.0 | 07 Mar 24 19:36 PST | 07 Mar 24 19:36 PST |
	| start   | -p cert-expiration-988000             | cert-expiration-988000    | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-034000 ssh               | docker-flags-034000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-034000 ssh               | docker-flags-034000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-034000                | docker-flags-034000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST | 07 Mar 24 19:36 PST |
	| start   | -p cert-options-168000                | cert-options-168000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-168000 ssh               | cert-options-168000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-168000 -- sudo        | cert-options-168000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-168000                | cert-options-168000       | jenkins | v1.32.0 | 07 Mar 24 19:36 PST | 07 Mar 24 19:36 PST |
	| start   | -p running-upgrade-440000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 19:36 PST | 07 Mar 24 19:38 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-440000             | running-upgrade-440000    | jenkins | v1.32.0 | 07 Mar 24 19:38 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-988000             | cert-expiration-988000    | jenkins | v1.32.0 | 07 Mar 24 19:39 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-988000             | cert-expiration-988000    | jenkins | v1.32.0 | 07 Mar 24 19:39 PST | 07 Mar 24 19:39 PST |
	| start   | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.32.0 | 07 Mar 24 19:39 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.32.0 | 07 Mar 24 19:39 PST | 07 Mar 24 19:40 PST |
	| start   | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.32.0 | 07 Mar 24 19:40 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.32.0 | 07 Mar 24 19:40 PST | 07 Mar 24 19:40 PST |
	| start   | -p stopped-upgrade-126000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 19:40 PST | 07 Mar 24 19:40 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-126000 stop           | minikube                  | jenkins | v1.26.0 | 07 Mar 24 19:40 PST | 07 Mar 24 19:41 PST |
	| start   | -p stopped-upgrade-126000             | stopped-upgrade-126000    | jenkins | v1.32.0 | 07 Mar 24 19:41 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 19:41:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 19:41:11.077647    4765 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:41:11.077797    4765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:41:11.077801    4765 out.go:304] Setting ErrFile to fd 2...
	I0307 19:41:11.077804    4765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:41:11.077974    4765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:41:11.079172    4765 out.go:298] Setting JSON to false
	I0307 19:41:11.098580    4765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4243,"bootTime":1709865028,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:41:11.098648    4765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:41:11.102250    4765 out.go:177] * [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:41:11.110269    4765 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:41:11.115099    4765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:41:11.110282    4765 notify.go:220] Checking for updates...
	I0307 19:41:11.121167    4765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:41:11.124080    4765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:41:11.127120    4765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:41:11.130171    4765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:41:11.133357    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:41:11.137086    4765 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 19:41:11.140130    4765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:41:11.143065    4765 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:41:11.150133    4765 start.go:297] selected driver: qemu2
	I0307 19:41:11.150138    4765 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:41:11.150186    4765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:41:11.152644    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:41:11.152662    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:41:11.152692    4765 start.go:340] cluster config:
	{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
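For orientation, the &{...} blob above is minikube's saved profile config printed with Go's %+v-style formatting. A trimmed sketch of its shape in Go, keeping only a few fields visible in the dump (field names are taken from the dump, types are inferred; this is not minikube's actual struct definition):

	package main

	import "fmt"

	// Inferred from the config dump above — illustrative only.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	type Node struct {
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	type ClusterConfig struct {
		Name             string
		Memory           int // MiB
		CPUs             int
		Driver           string
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}

	func main() {
		cfg := ClusterConfig{
			Name:   "stopped-upgrade-126000",
			Memory: 2200,
			CPUs:   2,
			Driver: "qemu2",
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.24.1",
				ClusterName:       "stopped-upgrade-126000",
				ContainerRuntime:  "docker",
				ServiceCIDR:       "10.96.0.0/12",
			},
			Nodes: []Node{{IP: "10.0.2.15", Port: 8443, KubernetesVersion: "v1.24.1", ControlPlane: true, Worker: true}},
		}
		fmt.Printf("%+v\n", cfg) // prints in the same &{...}-style notation seen in the log
	}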
	I0307 19:41:11.152741    4765 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:41:11.160096    4765 out.go:177] * Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	I0307 19:41:11.164136    4765 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:41:11.164152    4765 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 19:41:11.164162    4765 cache.go:56] Caching tarball of preloaded images
	I0307 19:41:11.164226    4765 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:41:11.164233    4765 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 19:41:11.164289    4765 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0307 19:41:11.164764    4765 start.go:360] acquireMachinesLock for stopped-upgrade-126000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:41:11.164798    4765 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "stopped-upgrade-126000"
	I0307 19:41:11.164806    4765 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:41:11.164811    4765 fix.go:54] fixHost starting: 
	I0307 19:41:11.164920    4765 fix.go:112] recreateIfNeeded on stopped-upgrade-126000: state=Stopped err=<nil>
	W0307 19:41:11.164929    4765 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:41:11.173130    4765 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	I0307 19:41:09.455691    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:09.455933    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:09.478796    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:09.478910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:09.496394    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:09.496486    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:09.509347    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:09.509418    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:09.520154    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:09.520219    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:09.530599    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:09.530678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:09.541540    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:09.541603    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:09.551779    4574 logs.go:276] 0 containers: []
	W0307 19:41:09.551789    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:09.551843    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:09.562046    4574 logs.go:276] 0 containers: []
	W0307 19:41:09.562056    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:09.562063    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:09.562069    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:09.573725    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:09.573735    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:09.590958    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:09.590969    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:09.628524    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:09.628532    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:09.640992    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:09.641002    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:09.654485    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:09.654496    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:09.668620    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:09.668635    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:09.680034    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:09.680044    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:09.684585    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:09.684594    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:09.699019    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:09.699029    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:09.710604    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:09.710616    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:09.746865    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:09.746879    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:09.761392    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:09.761403    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:09.778344    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:09.778356    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:09.803994    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:09.804000    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:12.319591    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
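The 4574 process is stuck in a diagnose loop: every /healthz probe against 10.0.2.15:8443 times out, after which it enumerates each control-plane component's containers and pulls their recent logs before probing again. Condensed into a shell sketch (the endpoint, filters, and --tail value are exactly as logged; the loop framing is an approximation):

    # Approximate shape of the probe-then-gather cycle repeated above.
    while ! curl -ks --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
        for name in kube-apiserver etcd coredns kube-scheduler \
                    kube-proxy kube-controller-manager; do
            # list containers for this component, running or exited
            for id in $(docker ps -a --filter=name=k8s_${name} --format '{{.ID}}'); do
                docker logs --tail 400 "$id"
            done
        done
    done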
	I0307 19:41:11.177147    4765 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50475-:22,hostfwd=tcp::50476-:2376,hostname=stopped-upgrade-126000 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/disk.qcow2
	I0307 19:41:11.223378    4765 main.go:141] libmachine: STDOUT: 
	I0307 19:41:11.223406    4765 main.go:141] libmachine: STDERR: 
	I0307 19:41:11.223414    4765 main.go:141] libmachine: Waiting for VM to start (ssh -p 50475 docker@127.0.0.1)...
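Broken out per flag, the qemu-system-aarch64 invocation above reads as follows; every value is copied from the log line, with M abbreviating the machine directory. Host port 50475 forwards to the guest's SSH port (22) and 50476 to the Docker TLS port (2376), which is why the wait that follows uses ssh -p 50475.

    M=/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000
    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host -accel hvf \
      -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -m 2200 -smp 2 \
      -boot d -cdrom "$M/boot2docker.iso" \
      -qmp unix:"$M/monitor",server,nowait \
      -pidfile "$M/qemu.pid" \
      -nic user,model=virtio,hostfwd=tcp::50475-:22,hostfwd=tcp::50476-:2376,hostname=stopped-upgrade-126000 \
      -daemonize "$M/disk.qcow2"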
	I0307 19:41:17.321907    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:17.322096    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:17.336262    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:17.336338    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:17.352591    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:17.352660    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:17.363583    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:17.363657    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:17.374193    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:17.374261    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:17.384757    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:17.384824    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:17.395295    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:17.395360    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:17.406169    4574 logs.go:276] 0 containers: []
	W0307 19:41:17.406180    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:17.406236    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:17.417319    4574 logs.go:276] 0 containers: []
	W0307 19:41:17.417331    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:17.417339    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:17.417345    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:17.428534    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:17.428546    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:17.454794    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:17.454806    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:17.460045    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:17.460052    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:17.474153    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:17.474164    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:17.489083    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:17.489095    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:17.501384    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:17.501396    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:17.539217    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:17.539225    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:17.573878    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:17.573893    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:17.588284    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:17.588293    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:17.602942    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:17.602951    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:17.621915    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:17.621927    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:17.633906    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:17.633917    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:17.646201    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:17.646214    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:17.669116    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:17.669127    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:20.195156    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:25.197230    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:25.197432    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:25.213908    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:25.213983    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:25.224411    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:25.224485    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:25.235054    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:25.235126    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:25.246591    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:25.246664    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:25.257601    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:25.257671    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:25.268200    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:25.268265    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:25.278478    4574 logs.go:276] 0 containers: []
	W0307 19:41:25.278488    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:25.278544    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:25.289031    4574 logs.go:276] 0 containers: []
	W0307 19:41:25.289289    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:25.289313    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:25.289412    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:25.330216    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:25.330231    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:25.334523    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:25.334530    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:25.352545    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:25.352555    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:25.377338    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:25.377347    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:25.391889    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:25.391903    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:25.408877    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:25.408892    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:25.428497    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:25.428516    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:25.440153    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:25.440165    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:25.475316    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:25.475328    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:25.489933    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:25.489943    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:25.503669    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:25.503681    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:25.523171    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:25.523182    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:25.535289    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:25.535301    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:25.547887    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:25.547898    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:28.062016    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:31.348388    4765 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0307 19:41:31.348995    4765 machine.go:94] provisionDockerMachine start ...
	I0307 19:41:31.349086    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.349405    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.349417    4765 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:41:31.420617    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 19:41:31.420640    4765 buildroot.go:166] provisioning hostname "stopped-upgrade-126000"
	I0307 19:41:31.420724    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.420898    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.420907    4765 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-126000 && echo "stopped-upgrade-126000" | sudo tee /etc/hostname
	I0307 19:41:31.488053    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-126000
	
	I0307 19:41:31.488108    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.488217    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.488227    4765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-126000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-126000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-126000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 19:41:31.550310    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:41:31.550327    4765 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18333-1199/.minikube CaCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18333-1199/.minikube}
	I0307 19:41:31.550342    4765 buildroot.go:174] setting up certificates
	I0307 19:41:31.550346    4765 provision.go:84] configureAuth start
	I0307 19:41:31.550350    4765 provision.go:143] copyHostCerts
	I0307 19:41:31.550420    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem, removing ...
	I0307 19:41:31.550428    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem
	I0307 19:41:31.550542    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem (1123 bytes)
	I0307 19:41:31.550705    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem, removing ...
	I0307 19:41:31.550710    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem
	I0307 19:41:31.550792    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem (1675 bytes)
	I0307 19:41:31.550927    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem, removing ...
	I0307 19:41:31.550932    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem
	I0307 19:41:31.550989    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem (1082 bytes)
	I0307 19:41:31.551076    4765 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-126000 san=[127.0.0.1 localhost minikube stopped-upgrade-126000]
	I0307 19:41:31.670070    4765 provision.go:177] copyRemoteCerts
	I0307 19:41:31.670099    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:41:31.670107    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:31.699644    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:41:31.706513    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 19:41:31.713658    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 19:41:31.720262    4765 provision.go:87] duration metric: took 169.915375ms to configureAuth
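configureAuth above took ~170ms to regenerate the Docker server certificate with the SANs from the san=[...] field and push it to /etc/docker. minikube does this in Go; as a rough openssl equivalent of the same certificate (illustrative, not what the test actually ran):

    # Illustrative openssl equivalent; the SANs match the san=[...] field above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.stopped-upgrade-126000"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-126000')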
	I0307 19:41:31.720270    4765 buildroot.go:189] setting minikube options for container-runtime
	I0307 19:41:31.720362    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:41:31.720396    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.720476    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.720480    4765 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 19:41:31.778463    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 19:41:31.778470    4765 buildroot.go:70] root file system type: tmpfs
	I0307 19:41:31.778519    4765 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 19:41:31.778570    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.778670    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.778706    4765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 19:41:31.843540    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 19:41:31.843597    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.843710    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.843718    4765 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 19:41:32.216660    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 19:41:32.216676    4765 machine.go:97] duration metric: took 867.707166ms to provisionDockerMachine
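The diff || { mv; ... } command above is an idempotent unit install: docker.service is only replaced, and the daemon reloaded and restarted, when the freshly rendered file differs from what is on disk. Here diff failed because no unit existed yet, so the new file was moved into place and the service enabled. The pattern in isolation:

    # Swap in the new unit and restart docker only when it actually changed.
    if ! sudo diff -u /lib/systemd/system/docker.service \
                     /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    fi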
	I0307 19:41:32.216684    4765 start.go:293] postStartSetup for "stopped-upgrade-126000" (driver="qemu2")
	I0307 19:41:32.216691    4765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:41:32.216760    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:41:32.216770    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:32.249423    4765 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:41:32.250652    4765 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 19:41:32.250661    4765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/addons for local assets ...
	I0307 19:41:32.250939    4765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/files for local assets ...
	I0307 19:41:32.251054    4765 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem -> 16202.pem in /etc/ssl/certs
	I0307 19:41:32.251176    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 19:41:32.254050    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:41:32.260880    4765 start.go:296] duration metric: took 44.192375ms for postStartSetup
	I0307 19:41:32.260895    4765 fix.go:56] duration metric: took 21.096966667s for fixHost
	I0307 19:41:32.260931    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:32.261072    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:32.261076    4765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 19:41:32.319677    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709869292.044635212
	
	I0307 19:41:32.319684    4765 fix.go:216] guest clock: 1709869292.044635212
	I0307 19:41:32.319688    4765 fix.go:229] Guest: 2024-03-07 19:41:32.044635212 -0800 PST Remote: 2024-03-07 19:41:32.260897 -0800 PST m=+21.218306209 (delta=-216.261788ms)
	I0307 19:41:32.319702    4765 fix.go:200] guest clock delta is within tolerance: -216.261788ms
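The guest-clock check reads date +%s.%N over SSH and subtracts the host clock: 1709869292.0446 (guest) minus ...292.2609 (host) gives the -216.26ms delta reported, inside minikube's tolerance, so no resync is needed. A minimal reproduction (assuming GNU date on both ends; port as in this run):

    # Compare guest and host clocks the way fix.go does (illustrative).
    guest=$(ssh -p 50475 docker@127.0.0.1 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$guest - $host" | bc)s"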
	I0307 19:41:32.319705    4765 start.go:83] releasing machines lock for "stopped-upgrade-126000", held for 21.155785833s
	I0307 19:41:32.319767    4765 ssh_runner.go:195] Run: cat /version.json
	I0307 19:41:32.319769    4765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:41:32.319775    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:32.319785    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	W0307 19:41:32.320363    4765 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50475: connect: connection refused
	I0307 19:41:32.320387    4765 retry.go:31] will retry after 340.433681ms: dial tcp [::1]:50475: connect: connection refused
	W0307 19:41:32.350458    4765 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 19:41:32.350521    4765 ssh_runner.go:195] Run: systemctl --version
	I0307 19:41:32.352229    4765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 19:41:32.353701    4765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 19:41:32.353724    4765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 19:41:32.356404    4765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 19:41:32.360499    4765 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 19:41:32.360506    4765 start.go:494] detecting cgroup driver to use...
	I0307 19:41:32.360581    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:41:32.368307    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 19:41:32.371432    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:41:32.374744    4765 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:41:32.374768    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:41:32.378301    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:41:32.381824    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:41:32.385172    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:41:32.388312    4765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:41:32.391235    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 19:41:32.394626    4765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:41:32.397827    4765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:41:32.400806    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:32.485009    4765 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 19:41:32.492771    4765 start.go:494] detecting cgroup driver to use...
	I0307 19:41:32.492843    4765 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 19:41:32.504533    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:41:32.512059    4765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 19:41:32.517821    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:41:32.522535    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:41:32.527437    4765 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 19:41:32.566404    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:41:32.571691    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:41:32.576951    4765 ssh_runner.go:195] Run: which cri-dockerd
	I0307 19:41:32.578364    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 19:41:32.581400    4765 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 19:41:32.586259    4765 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 19:41:32.676114    4765 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 19:41:32.765927    4765 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 19:41:32.765990    4765 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
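The log prints only the size of the generated /etc/docker/daemon.json (130 bytes), not its contents; given the "cgroupfs" driver message above, its key setting is presumably the exec-opts line below (assumed, not verbatim from this run):

    # Assumed shape of the daemon.json written above; only its size is logged.
    printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' |
      sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker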
	I0307 19:41:32.772147    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:32.853544    4765 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:41:34.007193    4765 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153679459s)
	I0307 19:41:34.007248    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 19:41:34.012175    4765 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 19:41:34.018113    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:41:34.023136    4765 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 19:41:34.106094    4765 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 19:41:34.180924    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:34.258148    4765 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 19:41:34.264529    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:41:34.268758    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:34.341199    4765 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 19:41:34.380692    4765 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 19:41:34.380774    4765 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 19:41:34.382704    4765 start.go:562] Will wait 60s for crictl version
	I0307 19:41:34.382744    4765 ssh_runner.go:195] Run: which crictl
	I0307 19:41:34.384811    4765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:41:34.399165    4765 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 19:41:34.399233    4765 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:41:34.415635    4765 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:41:34.437015    4765 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 19:41:34.437078    4765 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 19:41:34.438370    4765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:41:34.442305    4765 kubeadm.go:877] updating cluster {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 19:41:34.442354    4765 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:41:34.442397    4765 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:41:34.453578    4765 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:41:34.453587    4765 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
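This mismatch drives all of the cache work that follows: the preloaded tarball ships images tagged k8s.gcr.io/*, while this minikube build looks for registry.k8s.io/* (the upstream registry was renamed), so every control-plane image is treated as missing and re-transferred from the host cache. A manual workaround would be to alias the tags, e.g.:

    # Hypothetical manual fix: alias an old-registry tag to the new name
    # so the cache check finds it (one image shown; repeat per image).
    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1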
	I0307 19:41:34.453630    4765 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:41:34.457060    4765 ssh_runner.go:195] Run: which lz4
	I0307 19:41:34.458355    4765 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 19:41:34.459608    4765 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 19:41:34.459618    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 19:41:35.223254    4765 docker.go:649] duration metric: took 764.965666ms to copy over tarball
	I0307 19:41:35.223307    4765 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
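The preload fast path above is: confirm lz4 exists in the guest, stat the target (absent), copy the 359 MB tarball from the host cache, then unpack it over /var. As one sequence (commands as logged; scp is a hypothetical stand-in for minikube's internal SSH copy, and the host cache path is abbreviated):

    TARBALL=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    which lz4                                  # decompressor must exist in the guest
    stat -c "%s %y" /preloaded.tar.lz4 ||
      scp -P 50475 "$HOME/.minikube/cache/preloaded-tarball/$TARBALL" \
          docker@127.0.0.1:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability \
         -I lz4 -C /var -xf /preloaded.tar.lz4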
	I0307 19:41:33.064365    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:33.064522    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:33.080405    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:33.080492    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:33.093689    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:33.093759    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:33.104497    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:33.104563    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:33.114890    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:33.114963    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:33.125519    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:33.125585    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:33.136118    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:33.136184    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:33.146132    4574 logs.go:276] 0 containers: []
	W0307 19:41:33.146145    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:33.146210    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:33.156339    4574 logs.go:276] 0 containers: []
	W0307 19:41:33.156350    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:33.156358    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:33.156364    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:33.160645    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:33.160651    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:33.172593    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:33.172604    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:33.183354    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:33.183366    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:33.200305    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:33.200315    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:33.235892    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:33.235900    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:33.252790    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:33.252803    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:33.266564    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:33.266575    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:33.280931    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:33.280945    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:33.292506    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:33.292520    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:33.303819    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:33.303829    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:33.339125    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:33.339138    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:33.353142    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:33.353153    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:33.372632    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:33.372644    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:33.396379    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:33.396387    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:35.908632    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:36.408602    4765 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185331708s)
	I0307 19:41:36.408615    4765 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 19:41:36.424179    4765 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:41:36.427107    4765 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 19:41:36.432308    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:36.512715    4765 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:41:38.025115    4765 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.512446s)
	I0307 19:41:38.025197    4765 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:41:38.038670    4765 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:41:38.038678    4765 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 19:41:38.038683    4765 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 19:41:38.045113    4765 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:38.045113    4765 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 19:41:38.045178    4765 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:38.045190    4765 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:38.045292    4765 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:38.045491    4765 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:38.045765    4765 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:38.045914    4765 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:38.054834    4765 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:38.054988    4765 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:38.055930    4765 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:38.056025    4765 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:38.055983    4765 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:38.056110    4765 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:38.056119    4765 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 19:41:38.056190    4765 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.020396    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 19:41:40.058577    4765 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 19:41:40.058626    4765 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 19:41:40.058722    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 19:41:40.079112    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 19:41:40.079264    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 19:41:40.082446    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 19:41:40.082464    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 19:41:40.090039    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.093108    4765 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 19:41:40.093120    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 19:41:40.102443    4765 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 19:41:40.102466    4765 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.102522    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.132333    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0307 19:41:40.132381    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 19:41:40.135629    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.145047    4765 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 19:41:40.145067    4765 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.145116    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.149102    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.160079    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0307 19:41:40.160409    4765 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 19:41:40.160530    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.167976    4765 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 19:41:40.167996    4765 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.168046    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.170167    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.173707    4765 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 19:41:40.173726    4765 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.173767    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.174111    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.184959    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 19:41:40.193408    4765 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 19:41:40.193431    4765 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.193490    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.199070    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 19:41:40.199090    4765 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 19:41:40.199108    4765 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.199155    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.199175    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:41:40.205413    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 19:41:40.205434    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 19:41:40.205448    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 19:41:40.217088    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 19:41:40.217201    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:41:40.218820    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0307 19:41:40.218836    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0307 19:41:40.293988    4765 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:41:40.294003    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 19:41:40.436201    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 19:41:40.438801    4765 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:41:40.438810    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0307 19:41:40.582198    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
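
The image transfers above follow a fixed pattern: an existence check on the guest (stat -c "%s %y" exits non-zero when the tarball is absent), an scp of the cached tarball only when missing, then a stream into the daemon via sudo cat <tar> | docker load. A minimal Go sketch of that check-then-transfer-then-load flow, using plain ssh/scp subprocesses in place of minikube's internal SSH runner (host and paths below are placeholders, not values from this run):

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the pattern in the log: skip the copy when the
// tarball is already on the guest, otherwise transfer it, then pipe it
// into the Docker daemon.
func loadCachedImage(host, localTar, remoteTar string) error {
	// Existence check, as in the log's ssh_runner "existence check" lines;
	// stat fails with status 1 when the file does not exist.
	check := fmt.Sprintf("stat -c '%%s %%y' %s", remoteTar)
	if err := exec.Command("ssh", host, check).Run(); err != nil {
		// Not present yet: copy the cached tarball over.
		if err := exec.Command("scp", localTar, host+":"+remoteTar).Run(); err != nil {
			return fmt.Errorf("scp %s: %w", localTar, err)
		}
	}
	// Stream the tarball into the runtime: sudo cat <tar> | docker load.
	load := fmt.Sprintf("sudo cat %s | docker load", remoteTar)
	if out, err := exec.Command("ssh", host, "/bin/bash", "-c", load).CombinedOutput(); err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical invocation for one of the images seen above.
	if err := loadCachedImage("user@10.0.2.15",
		"/tmp/cache/etcd_3.5.3-0", "/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
		fmt.Println(err)
	}
}
```
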
	W0307 19:41:40.742276    4765 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 19:41:40.742487    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.765172    4765 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 19:41:40.765202    4765 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.765278    4765 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.785582    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 19:41:40.785714    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:41:40.787420    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 19:41:40.787435    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 19:41:40.813789    4765 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:41:40.813803    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 19:41:41.065161    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 19:41:41.065194    4765 cache_images.go:92] duration metric: took 3.026602709s to LoadCachedImages
	W0307 19:41:41.065236    4765 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0307 19:41:41.065276    4765 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 19:41:41.065334    4765 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-126000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 19:41:41.065395    4765 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 19:41:40.910594    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:40.910695    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:40.932472    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:40.932547    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:40.944265    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:40.944336    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:40.954898    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:40.954967    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:40.966425    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:40.966490    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:40.977614    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:40.977678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:40.988720    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:40.988789    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:40.999338    4574 logs.go:276] 0 containers: []
	W0307 19:41:40.999351    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:40.999408    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:41.010371    4574 logs.go:276] 0 containers: []
	W0307 19:41:41.010384    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:41.010391    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:41.010396    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:41.022751    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:41.022763    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:41.035168    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:41.035181    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:41.049503    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:41.049514    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:41.064620    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:41.064633    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:41.080168    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:41.080183    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:41.100505    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:41.100519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:41.113298    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:41.113308    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:41.149897    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:41.149908    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:41.164546    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:41.164556    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:41.177287    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:41.177297    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:41.203789    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:41.203801    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:41.220903    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:41.220916    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:41.227116    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:41.227128    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:41.266205    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:41.266216    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:41.080217    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:41:41.080495    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:41:41.080507    4765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:41:41.080516    4765 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126000 NodeName:stopped-upgrade-126000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 19:41:41.080581    4765 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-126000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 19:41:41.080957    4765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 19:41:41.083924    4765 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:41:41.083974    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:41:41.086744    4765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 19:41:41.091857    4765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:41:41.097042    4765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
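
The 2096-byte kubeadm.yaml.new written here is the multi-document stream printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". One way to sanity-check such a stream is to decode each document and read its apiVersion/kind; a sketch using gopkg.in/yaml.v3, which is an assumption chosen for illustration and not part of minikube's code:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder walks the "---"-separated documents one at a time and
	// returns io.EOF after the last one.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```

Against the config above this should list the four expected kinds in order, which is enough to confirm no document was dropped during templating.
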
	I0307 19:41:41.103476    4765 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 19:41:41.104826    4765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:41:41.108649    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:41.192473    4765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:41:41.197796    4765 certs.go:68] Setting up /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000 for IP: 10.0.2.15
	I0307 19:41:41.197808    4765 certs.go:194] generating shared ca certs ...
	I0307 19:41:41.197816    4765 certs.go:226] acquiring lock for ca certs: {Name:mkeed6c4d5ba27d3ef2bc04c52c43819ca546cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.197965    4765 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key
	I0307 19:41:41.198014    4765 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key
	I0307 19:41:41.198019    4765 certs.go:256] generating profile certs ...
	I0307 19:41:41.198090    4765 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key
	I0307 19:41:41.198108    4765 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522
	I0307 19:41:41.198120    4765 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 19:41:41.392227    4765 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 ...
	I0307 19:41:41.392243    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522: {Name:mkb5d7319d65594aa8434f1dd9aee32ab3bfe11a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.393623    4765 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 ...
	I0307 19:41:41.393641    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522: {Name:mk6e53ce5f1bfbe4a87d76c16cf03e10911c4d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.393787    4765 certs.go:381] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt
	I0307 19:41:41.393919    4765 certs.go:385] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key
	I0307 19:41:41.394171    4765 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.key
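
The apiserver profile cert generated above is issued for a fixed set of IP SANs: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A minimal crypto/x509 sketch of producing a certificate with those SANs; it is self-signed here for brevity, whereas the run above signs with the minikubeCA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// The same IP SANs the log reports for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for the sketch (template doubles as parent); a real
	// profile cert would pass the CA cert and CA key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
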
	I0307 19:41:41.394292    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem (1338 bytes)
	W0307 19:41:41.394325    4765 certs.go:480] ignoring /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620_empty.pem, impossibly tiny 0 bytes
	I0307 19:41:41.394331    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:41:41.394352    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:41:41.394374    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:41:41.394394    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem (1675 bytes)
	I0307 19:41:41.394429    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:41:41.394742    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:41:41.401871    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:41:41.409023    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:41:41.416483    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:41:41.423627    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 19:41:41.430343    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 19:41:41.438084    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:41:41.445024    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 19:41:41.451555    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem --> /usr/share/ca-certificates/1620.pem (1338 bytes)
	I0307 19:41:41.458120    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /usr/share/ca-certificates/16202.pem (1708 bytes)
	I0307 19:41:41.465916    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:41:41.474603    4765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:41:41.480875    4765 ssh_runner.go:195] Run: openssl version
	I0307 19:41:41.483123    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:41:41.486575    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.488317    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:57 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.488356    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.490153    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 19:41:41.493488    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1620.pem && ln -fs /usr/share/ca-certificates/1620.pem /etc/ssl/certs/1620.pem"
	I0307 19:41:41.496679    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.497995    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:04 /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.498015    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.499778    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1620.pem /etc/ssl/certs/51391683.0"
	I0307 19:41:41.502681    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16202.pem && ln -fs /usr/share/ca-certificates/16202.pem /etc/ssl/certs/16202.pem"
	I0307 19:41:41.506068    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.507540    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:04 /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.507563    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.509311    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16202.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:41:41.512164    4765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:41:41.513621    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 19:41:41.516214    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 19:41:41.518051    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 19:41:41.519999    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 19:41:41.521697    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 19:41:41.523403    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0307 19:41:41.525096    4765 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:41:41.525158    4765 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:41:41.535479    4765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 19:41:41.538534    4765 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 19:41:41.538540    4765 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 19:41:41.538543    4765 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 19:41:41.538565    4765 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 19:41:41.541892    4765 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:41:41.542190    4765 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126000" does not appear in /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:41:41.542280    4765 kubeconfig.go:62] /Users/jenkins/minikube-integration/18333-1199/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126000" cluster setting kubeconfig missing "stopped-upgrade-126000" context setting]
	I0307 19:41:41.542473    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.542943    4765 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1037a76a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:41:41.543254    4765 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 19:41:41.546053    4765 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-126000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0307 19:41:41.546058    4765 kubeadm.go:1153] stopping kube-system containers ...
	I0307 19:41:41.546096    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:41:41.556623    4765 docker.go:483] Stopping containers: [3b6448caa1dd 5edfe0ffe4cd 6572e576175a 31a1ca5c904b 095cdd1dff64 ab5e0688264a ad5a1b9317e8 3b2ae43e4bd5]
	I0307 19:41:41.556688    4765 ssh_runner.go:195] Run: docker stop 3b6448caa1dd 5edfe0ffe4cd 6572e576175a 31a1ca5c904b 095cdd1dff64 ab5e0688264a ad5a1b9317e8 3b2ae43e4bd5
	I0307 19:41:41.567749    4765 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 19:41:41.573257    4765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:41:41.575874    4765 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:41:41.575879    4765 kubeadm.go:156] found existing configuration files:
	
	I0307 19:41:41.575901    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf
	I0307 19:41:41.578470    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:41:41.578495    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:41:41.581528    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf
	I0307 19:41:41.584316    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:41:41.584344    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:41:41.587061    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf
	I0307 19:41:41.589932    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:41:41.589955    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:41:41.592994    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf
	I0307 19:41:41.595416    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:41:41.595434    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 19:41:41.598139    4765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:41:41.601177    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:41.621697    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.009916    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.133088    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.155455    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
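
The restart path above drives individual `kubeadm init phase` subcommands rather than a full `kubeadm init`: certs, kubeconfigs, kubelet start, static control-plane manifests, then local etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of running that sequence from Go with os/exec, with the phase order taken verbatim from the log and error handling simplified:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The phase order the log runs when reusing an existing cluster config.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase is idempotent enough to rerun on restart; stop at
		// the first failure so later phases don't mask the real error.
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases completed")
}
```
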
	I0307 19:41:42.173706    4765 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:41:42.173801    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:42.675985    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:43.175832    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:43.179782    4765 api_server.go:72] duration metric: took 1.006120917s to wait for apiserver process to appear ...
	I0307 19:41:43.179790    4765 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:41:43.179799    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:43.782725    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:48.180154    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:48.180191    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:48.784481    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:48.784665    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:48.808486    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:48.808578    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:48.822194    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:48.822273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:48.838017    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:48.838112    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:48.848251    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:48.848324    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:48.858772    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:48.858833    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:48.868818    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:48.868886    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:48.879284    4574 logs.go:276] 0 containers: []
	W0307 19:41:48.879299    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:48.879357    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:48.889796    4574 logs.go:276] 0 containers: []
	W0307 19:41:48.889807    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:48.889815    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:48.889820    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:48.903600    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:48.903611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:48.920728    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:48.920739    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:48.933767    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:48.933781    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:48.948262    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:48.948273    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:48.972165    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:48.972172    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:49.006301    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:49.006312    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:49.020328    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:49.020338    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:49.033199    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:49.033213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:49.047925    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:49.047937    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:49.060615    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:49.060626    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:49.071940    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:49.071950    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:49.109731    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:49.109742    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:49.114680    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:49.114685    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:49.126068    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:49.126080    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:51.642285    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:53.181520    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:53.181563    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:56.644382    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:56.644495    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:41:56.661717    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:41:56.661795    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:41:56.683549    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:41:56.683614    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:41:56.700590    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:41:56.700657    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:41:56.712138    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:41:56.712208    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:41:56.722228    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:41:56.722294    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:41:56.732707    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:41:56.732779    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:41:56.742655    4574 logs.go:276] 0 containers: []
	W0307 19:41:56.742666    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:41:56.742726    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:41:56.752743    4574 logs.go:276] 0 containers: []
	W0307 19:41:56.752753    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:41:56.752761    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:41:56.752767    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:41:56.767017    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:41:56.767033    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:41:56.778616    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:41:56.778630    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:41:56.790442    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:41:56.790455    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:41:56.827951    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:41:56.827959    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:41:56.841916    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:41:56.841928    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:41:56.854959    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:41:56.854969    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:41:56.876703    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:41:56.876712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:41:56.894637    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:41:56.894648    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:41:56.928751    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:41:56.928762    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:41:56.941165    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:41:56.941176    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:41:56.952904    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:41:56.952917    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:41:56.967416    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:41:56.967426    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:41:56.982541    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:41:56.982551    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:41:57.005592    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:41:57.005601    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:41:58.181729    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:58.181771    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:59.512206    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:03.182010    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:03.182105    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:04.514216    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:04.514327    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:04.528464    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:04.528562    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:04.540210    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:04.540273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:04.550272    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:04.550341    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:04.566085    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:04.566160    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:04.577296    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:04.577364    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:04.588084    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:04.588155    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:04.597928    4574 logs.go:276] 0 containers: []
	W0307 19:42:04.597940    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:04.597995    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:04.608718    4574 logs.go:276] 0 containers: []
	W0307 19:42:04.608731    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:04.608739    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:04.608746    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:04.624251    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:04.624267    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:04.636361    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:04.636377    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:04.641027    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:04.641034    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:04.655514    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:04.655523    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:04.675051    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:04.675063    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:04.689287    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:04.689303    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:04.707285    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:04.707296    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:04.718562    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:04.718572    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:04.755226    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:04.755235    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:04.790481    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:04.790494    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:04.813511    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:04.813519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:04.831430    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:04.831441    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:04.842676    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:04.842687    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:04.855612    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:04.855625    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:07.369277    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:08.182765    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:08.182809    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:12.371433    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:12.371571    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:12.387623    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:12.387710    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:12.399786    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:12.399861    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:12.410839    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:12.410913    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:12.421207    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:12.421275    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:12.431848    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:12.431923    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:12.444990    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:12.445053    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:12.454848    4574 logs.go:276] 0 containers: []
	W0307 19:42:12.454857    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:12.454911    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:12.465069    4574 logs.go:276] 0 containers: []
	W0307 19:42:12.465082    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:12.465090    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:12.465096    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:12.501846    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:12.501858    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:12.515801    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:12.515812    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:12.526946    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:12.526957    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:12.531501    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:12.531508    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:12.542909    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:12.542920    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:12.562582    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:12.562596    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:12.574780    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:12.574792    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:12.590245    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:12.590255    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:12.624096    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:12.624112    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:12.638181    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:12.638191    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:12.649398    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:12.649409    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:12.663343    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:12.663354    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:12.677596    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:12.677605    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:12.703063    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:12.703080    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
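[Editor's note] Once the container IDs are known, each "Gathering logs for ..." pair maps to either docker logs --tail 400 <id> for containers or journalctl -u <unit> -n 400 for host services, plus kubectl describe nodes and a crictl-or-docker fallback for container status. A condensed local sketch (no SSH, container ID taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command through bash -c, mirroring the
// ssh_runner invocations in the log.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
		return
	}
	fmt.Printf("  collected %d bytes\n", len(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("etcd [bac00c8cd148]", "docker logs --tail 400 bac00c8cd148")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}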
	I0307 19:42:13.183430    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:13.183495    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:15.217633    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:18.184277    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:18.184325    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:20.219180    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:20.219379    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:20.237098    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:20.237186    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:20.250058    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:20.250133    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:20.261033    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:20.261103    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:20.271434    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:20.271514    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:20.281717    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:20.281783    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:20.297170    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:20.297236    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:20.307314    4574 logs.go:276] 0 containers: []
	W0307 19:42:20.307325    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:20.307382    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:20.317793    4574 logs.go:276] 0 containers: []
	W0307 19:42:20.317805    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:20.317814    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:20.317820    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:20.341178    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:20.341188    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:20.356475    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:20.356486    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:20.370793    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:20.370803    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:20.382467    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:20.382479    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:20.416424    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:20.416435    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:20.421071    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:20.421079    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:20.435227    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:20.435237    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:20.446797    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:20.446808    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:20.460874    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:20.460888    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:20.476303    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:20.476314    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:20.512165    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:20.512175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:20.526216    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:20.526228    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:20.543897    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:20.543907    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:20.555880    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:20.555893    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:23.185481    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:23.185562    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:23.071640    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:28.186452    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:28.186467    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:28.074140    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:28.074384    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:28.094772    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:28.094874    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:28.109367    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:28.109440    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:28.124742    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:28.124826    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:28.137143    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:28.137224    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:28.149633    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:28.149699    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:28.160585    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:28.160649    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:28.170283    4574 logs.go:276] 0 containers: []
	W0307 19:42:28.170297    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:28.170346    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:28.180799    4574 logs.go:276] 0 containers: []
	W0307 19:42:28.180811    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:28.180819    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:28.180825    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:28.218720    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:28.218728    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:28.230235    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:28.230252    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:28.242113    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:28.242126    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:28.246721    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:28.246726    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:28.280701    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:28.280712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:28.295376    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:28.295386    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:28.309676    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:28.309688    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:28.322186    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:28.322200    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:28.337081    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:28.337095    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:28.351371    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:28.351383    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:28.363467    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:28.363482    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:28.380627    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:28.380638    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:28.404435    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:28.404445    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:28.417376    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:28.417387    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:30.938116    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:33.188167    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:33.188274    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:35.940264    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:35.940411    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:35.954157    4574 logs.go:276] 2 containers: [b9200fdfc8fd 1ba821cda6b5]
	I0307 19:42:35.954232    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:35.966018    4574 logs.go:276] 2 containers: [15368389bb09 bac00c8cd148]
	I0307 19:42:35.966078    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:35.977842    4574 logs.go:276] 1 containers: [fbbcbd5dc003]
	I0307 19:42:35.977910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:35.988709    4574 logs.go:276] 2 containers: [28e50607b99a e77fdd625530]
	I0307 19:42:35.988781    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:35.999265    4574 logs.go:276] 1 containers: [a9dce041ac78]
	I0307 19:42:35.999332    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:36.009750    4574 logs.go:276] 2 containers: [26858d0321ef e775b0f452d6]
	I0307 19:42:36.009821    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:36.020528    4574 logs.go:276] 0 containers: []
	W0307 19:42:36.020538    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:36.020592    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:36.030565    4574 logs.go:276] 0 containers: []
	W0307 19:42:36.030575    4574 logs.go:278] No container was found matching "storage-provisioner"
	I0307 19:42:36.030583    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:42:36.030591    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:36.043640    4574 logs.go:123] Gathering logs for kube-scheduler [28e50607b99a] ...
	I0307 19:42:36.043649    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e50607b99a"
	I0307 19:42:36.057415    4574 logs.go:123] Gathering logs for kube-proxy [a9dce041ac78] ...
	I0307 19:42:36.057424    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9dce041ac78"
	I0307 19:42:36.072764    4574 logs.go:123] Gathering logs for kube-apiserver [b9200fdfc8fd] ...
	I0307 19:42:36.072773    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9200fdfc8fd"
	I0307 19:42:36.086854    4574 logs.go:123] Gathering logs for etcd [15368389bb09] ...
	I0307 19:42:36.086863    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15368389bb09"
	I0307 19:42:36.100874    4574 logs.go:123] Gathering logs for etcd [bac00c8cd148] ...
	I0307 19:42:36.100882    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bac00c8cd148"
	I0307 19:42:36.115705    4574 logs.go:123] Gathering logs for coredns [fbbcbd5dc003] ...
	I0307 19:42:36.115717    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbbcbd5dc003"
	I0307 19:42:36.127165    4574 logs.go:123] Gathering logs for kube-controller-manager [26858d0321ef] ...
	I0307 19:42:36.127176    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26858d0321ef"
	I0307 19:42:36.144690    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:36.144701    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:36.167341    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:36.167348    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:36.202255    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:36.202262    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:36.206490    4574 logs.go:123] Gathering logs for kube-apiserver [1ba821cda6b5] ...
	I0307 19:42:36.206496    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba821cda6b5"
	I0307 19:42:36.218603    4574 logs.go:123] Gathering logs for kube-scheduler [e77fdd625530] ...
	I0307 19:42:36.218616    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e77fdd625530"
	I0307 19:42:36.233176    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:36.233187    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:36.267600    4574 logs.go:123] Gathering logs for kube-controller-manager [e775b0f452d6] ...
	I0307 19:42:36.267611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e775b0f452d6"
	I0307 19:42:38.190701    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:38.190779    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:38.780495    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:43.782306    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:43.782377    4574 kubeadm.go:591] duration metric: took 4m3.170327917s to restartPrimaryControlPlane
	W0307 19:42:43.782421    4574 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 19:42:43.782439    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 19:42:44.659504    4574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:42:44.664413    4574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:42:44.667118    4574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:42:44.669699    4574 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:42:44.669705    4574 kubeadm.go:156] found existing configuration files:
	
	I0307 19:42:44.669725    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf
	I0307 19:42:44.672342    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:42:44.672364    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:42:44.675000    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf
	I0307 19:42:44.677405    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:42:44.677424    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:42:44.680478    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf
	I0307 19:42:44.683355    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:42:44.683377    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:42:44.685821    4574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf
	I0307 19:42:44.688778    4574 kubeadm.go:162] "https://control-plane.minikube.internal:50311" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50311 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:42:44.688797    4574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
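[Editor's note] Before re-running kubeadm init, minikube checks each file under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:50311 here) and removes any file that does not contain it. Since kubeadm reset already deleted all four files, every grep exits with status 2 and the rm calls are no-ops. A condensed sketch of that cleanup logic, assuming local execution rather than the in-guest SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected apiserver endpoint, mirroring the grep/rm pairs in the log.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file
		// does not exist; either way the config is stale (or already gone).
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // ignore errors: file may already be absent
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:50311")
}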
	I0307 19:42:44.691892    4574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 19:42:44.708882    4574 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 19:42:44.708978    4574 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 19:42:44.753311    4574 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:42:44.753368    4574 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:42:44.753413    4574 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 19:42:44.802435    4574 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:42:44.805694    4574 out.go:204]   - Generating certificates and keys ...
	I0307 19:42:44.805733    4574 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 19:42:44.805766    4574 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 19:42:44.805812    4574 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:42:44.805851    4574 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:42:44.805890    4574 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:42:44.805919    4574 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 19:42:44.805954    4574 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:42:44.805996    4574 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:42:44.806032    4574 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:42:44.806083    4574 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:42:44.806104    4574 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 19:42:44.806133    4574 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:42:44.893983    4574 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:42:45.048357    4574 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:42:45.165723    4574 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:42:45.257355    4574 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:42:45.287974    4574 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:42:45.288366    4574 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:42:45.288441    4574 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 19:42:45.378136    4574 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:42:43.191371    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:43.191761    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:43.224995    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:43.225130    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:43.244528    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:43.244616    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:43.266404    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:43.266481    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:43.278052    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:43.278116    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:43.288540    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:43.288614    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:43.299360    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:43.299431    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:43.309748    4765 logs.go:276] 0 containers: []
	W0307 19:42:43.309763    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:43.309816    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:43.320260    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:43.320284    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:43.320290    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:42:43.338546    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:43.338558    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:43.364876    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:43.364886    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:43.380416    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:43.380435    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:43.392015    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:43.392028    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:43.408263    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:43.408274    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:43.420328    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:43.420347    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:43.424778    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:43.424787    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:43.531312    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:43.531327    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:43.545251    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:43.545264    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:43.556944    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:43.556954    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:43.595166    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:43.595176    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:43.609289    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:43.609301    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:43.650935    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:43.650960    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:43.662386    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:43.662399    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:43.677630    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:43.677642    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:42:45.382245    4574 out.go:204]   - Booting up control plane ...
	I0307 19:42:45.382294    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:42:45.382332    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:42:45.382362    4574 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:42:45.382424    4574 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:42:45.382504    4574 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:42:49.885089    4574 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.506196 seconds
	I0307 19:42:49.885216    4574 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 19:42:49.891887    4574 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 19:42:50.404481    4574 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 19:42:50.404731    4574 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-440000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 19:42:50.915045    4574 kubeadm.go:309] [bootstrap-token] Using token: 4y3wpr.20ebbqchoj7k77el
	I0307 19:42:46.191385    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:50.919087    4574 out.go:204]   - Configuring RBAC rules ...
	I0307 19:42:50.919207    4574 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 19:42:50.919281    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 19:42:50.925103    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 19:42:50.926392    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 19:42:50.927556    4574 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 19:42:50.928773    4574 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 19:42:50.932708    4574 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 19:42:51.117061    4574 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 19:42:51.321040    4574 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 19:42:51.321540    4574 kubeadm.go:309] 
	I0307 19:42:51.321644    4574 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 19:42:51.321657    4574 kubeadm.go:309] 
	I0307 19:42:51.321792    4574 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 19:42:51.321805    4574 kubeadm.go:309] 
	I0307 19:42:51.321849    4574 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 19:42:51.321933    4574 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 19:42:51.322024    4574 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 19:42:51.322035    4574 kubeadm.go:309] 
	I0307 19:42:51.322128    4574 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 19:42:51.322195    4574 kubeadm.go:309] 
	I0307 19:42:51.322231    4574 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 19:42:51.322234    4574 kubeadm.go:309] 
	I0307 19:42:51.322258    4574 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 19:42:51.322293    4574 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 19:42:51.322361    4574 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 19:42:51.322368    4574 kubeadm.go:309] 
	I0307 19:42:51.322421    4574 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 19:42:51.322467    4574 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 19:42:51.322470    4574 kubeadm.go:309] 
	I0307 19:42:51.322513    4574 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4y3wpr.20ebbqchoj7k77el \
	I0307 19:42:51.322566    4574 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 \
	I0307 19:42:51.322578    4574 kubeadm.go:309] 	--control-plane 
	I0307 19:42:51.322586    4574 kubeadm.go:309] 
	I0307 19:42:51.322632    4574 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 19:42:51.322638    4574 kubeadm.go:309] 
	I0307 19:42:51.322677    4574 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4y3wpr.20ebbqchoj7k77el \
	I0307 19:42:51.322731    4574 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 
	I0307 19:42:51.322804    4574 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 19:42:51.322817    4574 cni.go:84] Creating CNI manager for ""
	I0307 19:42:51.322825    4574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:42:51.330476    4574 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 19:42:51.334578    4574 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 19:42:51.338036    4574 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
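[Editor's note] The 457-byte conflist payload itself is not shown in the log, so the content below is a generic bridge CNI configuration of the kind such a file contains, not minikube's exact bytes. A sketch that writes an illustrative conflist to the path the log names:

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI conflist; the real /etc/cni/net.d/1-k8s.conflist
// contents are not reproduced in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}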
	I0307 19:42:51.345290    4574 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:42:51.345345    4574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-440000 minikube.k8s.io/updated_at=2024_03_07T19_42_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=running-upgrade-440000 minikube.k8s.io/primary=true
	I0307 19:42:51.345380    4574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:42:51.388305    4574 ops.go:34] apiserver oom_adj: -16
	I0307 19:42:51.388321    4574 kubeadm.go:1106] duration metric: took 43.0485ms to wait for elevateKubeSystemPrivileges
	W0307 19:42:51.388373    4574 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 19:42:51.388379    4574 kubeadm.go:393] duration metric: took 4m10.789570791s to StartCluster
	I0307 19:42:51.388388    4574 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:42:51.388472    4574 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:42:51.388863    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:42:51.389068    4574 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:42:51.392506    4574 out.go:177] * Verifying Kubernetes components...
	I0307 19:42:51.389164    4574 config.go:182] Loaded profile config "running-upgrade-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:42:51.389133    4574 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:42:51.399471    4574 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-440000"
	I0307 19:42:51.399484    4574 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-440000"
	W0307 19:42:51.399500    4574 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:42:51.399510    4574 host.go:66] Checking if "running-upgrade-440000" exists ...
	I0307 19:42:51.399525    4574 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-440000"
	I0307 19:42:51.399540    4574 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-440000"
	I0307 19:42:51.399568    4574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:42:51.400985    4574 kapi.go:59] client config for running-upgrade-440000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/running-upgrade-440000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c576a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:42:51.401114    4574 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-440000"
	W0307 19:42:51.401119    4574 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:42:51.401126    4574 host.go:66] Checking if "running-upgrade-440000" exists ...
	I0307 19:42:51.406508    4574 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:42:51.410467    4574 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:42:51.410473    4574 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:42:51.410481    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:42:51.411350    4574 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:42:51.411355    4574 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:42:51.411359    4574 sshutil.go:53] new ssh client: &{IP:localhost Port:50279 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/running-upgrade-440000/id_rsa Username:docker}
	I0307 19:42:51.499011    4574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:42:51.504442    4574 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:42:51.504495    4574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:42:51.509588    4574 api_server.go:72] duration metric: took 120.504375ms to wait for apiserver process to appear ...
	I0307 19:42:51.509620    4574 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:42:51.509635    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:51.527203    4574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:42:51.530016    4574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
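[Editor's note] With the cluster re-initialized, minikube applies the addon manifests it copied earlier by invoking the version-pinned kubectl binary against the in-guest kubeconfig, as the two apply commands above show. A sketch of that invocation, assuming local execution in place of the sudo-over-SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies one addon manifest with the kubectl binary that matches
// the cluster's Kubernetes version (sketch of the log's kubectl apply lines).
func applyAddon(manifest string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}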
	I0307 19:42:51.193339    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:51.193442    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:51.205951    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:51.206026    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:51.217920    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:51.217996    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:51.232037    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:51.232120    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:51.242620    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:51.242681    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:51.257251    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:51.257311    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:51.267800    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:51.267878    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:51.279030    4765 logs.go:276] 0 containers: []
	W0307 19:42:51.279044    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:51.279109    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:51.294488    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:51.294507    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:51.294513    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:51.298621    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:51.298629    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:51.336607    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:51.336616    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:42:51.355518    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:51.355538    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:51.381296    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:51.381314    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:51.422368    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:51.422379    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:51.436671    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:51.436681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:51.448491    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:51.448504    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:51.466701    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:51.466715    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:51.481754    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:51.481764    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:51.493510    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:51.493524    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:51.507239    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:51.507250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:51.547033    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:51.547049    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:51.562443    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:51.562466    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:51.578399    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:51.578413    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:42:51.601439    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:51.601453    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:54.120113    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:56.511575    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:56.511611    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:59.122445    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:59.122814    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:59.154028    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:59.154165    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:59.174276    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:59.174374    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:59.188294    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:59.188364    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:59.200141    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:59.200215    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:59.215724    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:59.215801    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:59.226462    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:59.226535    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:59.236865    4765 logs.go:276] 0 containers: []
	W0307 19:42:59.236877    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:59.236938    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:59.247392    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:59.247410    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:59.247415    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:59.283781    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:59.283789    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:59.299197    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:59.299207    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:59.311619    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:59.311628    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:42:59.323545    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:59.323554    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:59.349794    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:59.349806    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:59.354019    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:59.354027    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:59.391006    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:59.391018    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:59.429374    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:59.429386    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:59.443558    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:59.443569    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:59.460124    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:59.460139    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:59.475055    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:59.475070    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:59.488673    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:59.488686    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:59.503830    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:59.503841    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:59.519801    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:59.519812    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:59.535205    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:59.535216    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
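
The cycle above is the diagnostic loop this test repeats: for each control-plane component, minikube enumerates containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails each match with "docker logs --tail 400 <id>". Below is a minimal Go sketch of that enumeration pattern, offered only as an illustration of what these log lines record: it runs the commands locally rather than over SSH, and the runCmd helper and hard-coded component list are assumptions for the example, not minikube's actual logs.go API.

    // Illustrative reconstruction of the container-discovery pattern in the
    // log above; runCmd and the component list are assumptions, not minikube code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runCmd runs a command locally and returns its combined output
    // (docker logs replays the container's stdout and stderr, so we
    // capture both streams).
    func runCmd(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	for _, c := range components {
    		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		ids, err := runCmd("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}")
    		if err != nil || ids == "" {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range strings.Fields(ids) {
    			// Mirrors: docker logs --tail 400 <id>
    			logs, _ := runCmd("docker", "logs", "--tail", "400", id)
    			fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
    		}
    	}
    }
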
	I0307 19:43:01.511690    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:01.511711    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:02.054684    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:06.511829    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:06.511872    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:07.056720    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:07.056893    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:07.071116    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:07.071199    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:07.082437    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:07.082495    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:07.092782    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:07.092845    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:07.102991    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:07.103066    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:07.113435    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:07.113494    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:07.124004    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:07.124078    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:07.134118    4765 logs.go:276] 0 containers: []
	W0307 19:43:07.134128    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:07.134186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:07.148550    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:07.148567    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:07.148573    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:07.167046    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:07.167060    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:07.178813    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:07.178823    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:07.203610    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:07.203623    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:07.215670    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:07.215685    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:07.232474    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:07.232484    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:07.243997    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:07.244008    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:07.259602    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:07.259613    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:07.296975    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:07.296990    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:07.300803    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:07.300811    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:07.315139    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:07.315151    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:07.326131    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:07.326143    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:07.341246    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:07.341257    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:07.353383    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:07.353395    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:07.368718    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:07.368729    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:07.408497    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:07.408508    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:09.949607    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:11.512111    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:11.512151    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:14.952089    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:14.952339    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:14.980096    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:14.980223    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:14.997146    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:14.997260    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:15.010589    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:15.010663    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:15.021703    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:15.021772    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:15.032145    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:15.032210    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:15.042986    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:15.043055    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:15.056464    4765 logs.go:276] 0 containers: []
	W0307 19:43:15.056477    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:15.056540    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:15.071150    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:15.071168    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:15.071174    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:15.107607    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:15.107619    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:15.122482    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:15.122493    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:15.147301    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:15.147310    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:15.158653    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:15.158662    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:15.196533    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:15.196544    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:15.200674    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:15.200681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:15.216465    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:15.216477    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:15.234654    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:15.234664    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:15.248816    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:15.248830    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:15.259988    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:15.259999    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:15.273953    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:15.273964    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:15.285591    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:15.285604    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:15.326929    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:15.326944    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:15.340824    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:15.340836    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:15.355687    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:15.355701    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:16.512537    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:16.512592    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:17.869240    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:21.513108    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:21.513164    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 19:43:21.897253    4574 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 19:43:21.901229    4574 out.go:177] * Enabled addons: storage-provisioner
	I0307 19:43:21.912108    4574 addons.go:505] duration metric: took 30.524271167s for enable addons: enabled=[storage-provisioner]
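
The interleaved api_server.go lines show both minikube processes probing https://10.0.2.15:8443/healthz and timing out on every attempt with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", which is the error an http.Client with a Timeout set produces when no response headers arrive in time. A minimal Go sketch of that probe loop follows, under stated assumptions: the endpoint, the 5-second per-attempt timeout, and the retry count are illustrative values, not minikube's actual configuration.

    // Illustrative healthz probe loop; values are assumptions, not minikube's.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-attempt deadline
    		Transport: &http.Transport{
    			// The apiserver serves a self-signed certificate in this setup.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://10.0.2.15:8443/healthz"
    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			// e.g. Get "...": context deadline exceeded
    			// (Client.Timeout exceeded while awaiting headers)
    			fmt.Printf("stopped: %s: %v\n", url, err)
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver is healthy")
    			return
    		}
    	}
    	fmt.Println("apiserver never became healthy")
    }
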
	I0307 19:43:22.871361    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:22.871668    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:22.904766    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:22.904894    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:22.926924    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:22.927021    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:22.940668    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:22.940748    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:22.952541    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:22.952610    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:22.963276    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:22.963347    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:22.974158    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:22.974225    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:22.984768    4765 logs.go:276] 0 containers: []
	W0307 19:43:22.984780    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:22.984835    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:22.995352    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:22.995367    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:22.995374    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:22.999968    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:22.999976    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:23.036638    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:23.036649    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:23.051216    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:23.051227    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:23.065804    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:23.065816    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:23.104525    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:23.104536    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:23.116665    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:23.116679    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:23.128033    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:23.128047    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:23.151609    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:23.151618    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:23.188459    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:23.188470    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:23.207569    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:23.207586    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:23.220194    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:23.220209    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:23.238655    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:23.238665    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:23.251275    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:23.251288    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:23.266220    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:23.266235    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:23.277943    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:23.277958    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:25.795004    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:26.513900    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:26.513936    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:30.797227    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:30.797413    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:30.813425    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:30.813511    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:30.827152    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:30.827228    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:30.838067    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:30.838142    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:30.848808    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:30.848870    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:30.858594    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:30.858660    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:30.869159    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:30.869223    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:30.879560    4765 logs.go:276] 0 containers: []
	W0307 19:43:30.879573    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:30.879630    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:30.890153    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:30.890171    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:30.890176    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:30.929837    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:30.929846    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:30.934250    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:30.934257    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:30.946026    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:30.946037    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:30.958107    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:30.958120    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:30.969775    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:30.969786    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:30.981279    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:30.981290    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:31.002738    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:31.002750    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:31.015050    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:31.015063    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:31.051558    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:31.051569    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:31.066053    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:31.066065    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:31.514921    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:31.514968    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:31.079895    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:31.079905    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:31.097217    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:31.097227    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:31.122171    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:31.122180    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:31.160419    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:31.160430    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:31.174510    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:31.174522    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:33.693448    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:36.516310    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:36.516361    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:38.695555    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:38.695721    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:38.710076    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:38.710157    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:38.720834    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:38.720903    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:38.731380    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:38.731450    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:38.749357    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:38.749426    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:38.771917    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:38.771983    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:38.783379    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:38.783449    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:38.797742    4765 logs.go:276] 0 containers: []
	W0307 19:43:38.797754    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:38.797814    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:38.808440    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:38.808455    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:38.808463    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:38.813103    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:38.813112    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:38.826441    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:38.826456    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:38.843633    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:38.843646    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:38.856035    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:38.856047    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:38.891587    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:38.891597    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:38.903479    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:38.903491    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:38.920625    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:38.920636    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:38.935424    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:38.935434    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:38.959498    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:38.959509    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:38.970564    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:38.970576    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:38.985170    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:38.985185    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:39.000804    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:39.000817    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:39.012491    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:39.012504    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:39.051405    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:39.051412    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:39.090377    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:39.090387    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:41.517700    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:41.517800    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:41.608333    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:46.519777    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:46.519800    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:46.610387    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:46.610541    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:46.627309    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:46.627384    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:46.637789    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:46.637865    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:46.648289    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:46.648356    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:46.658638    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:46.658708    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:46.672692    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:46.672759    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:46.683175    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:46.683247    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:46.693564    4765 logs.go:276] 0 containers: []
	W0307 19:43:46.693576    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:46.693632    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:46.704333    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:46.704349    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:46.704355    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:46.739302    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:46.739315    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:46.779095    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:46.779110    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:46.794052    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:46.794064    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:46.806736    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:46.806746    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:46.821597    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:46.821608    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:46.837991    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:46.838004    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:46.852210    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:46.852221    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:46.862987    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:46.862998    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:46.883291    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:46.883304    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:46.898015    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:46.898026    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:46.937361    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:46.937370    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:46.941990    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:46.941996    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:46.959253    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:46.959265    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:46.983713    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:46.983724    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:46.998523    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:46.998533    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:49.512517    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:51.521809    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:51.522022    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:51.538703    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:43:51.538790    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:51.563781    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:43:51.563859    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:51.581486    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:43:51.581570    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:51.599794    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:43:51.599867    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:51.610534    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:43:51.610604    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:51.630379    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:43:51.630449    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:51.640768    4574 logs.go:276] 0 containers: []
	W0307 19:43:51.640783    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:51.640843    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:51.650676    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:43:51.650693    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:43:51.650699    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:43:51.661839    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:43:51.661853    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:43:51.675251    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:43:51.675264    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:43:51.692494    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:51.692505    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:51.730614    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:43:51.730627    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:43:51.745433    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:43:51.745446    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:43:51.759510    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:43:51.759521    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:43:51.774209    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:43:51.774220    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:43:51.786060    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:43:51.786071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:43:51.800708    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:51.800720    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:51.835435    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:51.835449    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:51.840396    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:51.840403    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:51.865004    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:43:51.865012    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
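
The "container status" command above uses a shell fallback: `which crictl || echo crictl` makes the first branch fail cleanly when crictl is absent (running the literal word "crictl" fails), so the "|| sudo docker ps -a" alternative fires. A small Go sketch of the same preference order follows, purely as an illustration of the fallback logic; exec.LookPath plays the role of which here, and the function name containerStatus is an assumption.

    // Illustrative crictl-then-docker fallback; containerStatus is a
    // hypothetical helper mirroring the shell one-liner in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() ([]byte, error) {
    	// Prefer crictl when it is on PATH, like `which crictl`.
    	if _, err := exec.LookPath("crictl"); err == nil {
    		return exec.Command("crictl", "ps", "-a").CombinedOutput()
    	}
    	// crictl not installed: fall back to the Docker CLI.
    	return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
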
	I0307 19:43:54.514657    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:54.514835    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:54.539892    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:54.540040    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:54.555934    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:54.556018    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:54.571193    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:54.571267    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:54.589229    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:54.589300    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:54.599565    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:54.599632    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:54.611188    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:54.611252    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:54.622483    4765 logs.go:276] 0 containers: []
	W0307 19:43:54.622495    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:54.622550    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:54.633577    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:54.633592    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:54.633598    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:54.656902    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:54.656913    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:54.694936    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:54.694944    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:54.744115    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:54.744129    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:54.759062    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:54.759073    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:54.771154    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:54.771170    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:54.791368    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:54.791379    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:54.805849    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:54.805860    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:54.826243    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:54.826253    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:54.850920    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:54.850927    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:54.862601    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:54.862610    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:54.874608    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:54.874619    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:54.878991    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:54.878999    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:54.913412    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:54.913423    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:54.928034    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:54.928045    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:54.940008    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:54.940017    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:54.378266    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:57.457347    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:59.380480    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:59.380744    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:59.408685    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:43:59.408814    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:59.426836    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:43:59.426922    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:59.440507    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:43:59.440580    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:59.451787    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:43:59.451856    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:59.462525    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:43:59.462598    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:59.473744    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:43:59.473807    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:59.484025    4574 logs.go:276] 0 containers: []
	W0307 19:43:59.484040    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:59.484100    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:59.494874    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:43:59.494888    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:43:59.494894    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:43:59.507556    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:43:59.507566    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:59.519398    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:43:59.519411    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:43:59.533954    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:43:59.533963    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:43:59.548718    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:43:59.548729    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:43:59.563749    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:43:59.563760    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:43:59.576466    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:43:59.576478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:43:59.587989    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:43:59.587999    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:43:59.617464    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:43:59.617475    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:43:59.628948    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:59.628959    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:59.652257    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:59.652267    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:59.687435    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:59.687449    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:59.691823    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:59.691830    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:02.229474    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:02.459446    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:02.459624    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:02.485985    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:02.486090    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:02.502789    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:02.502873    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:02.516445    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:02.516522    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:02.532890    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:02.532966    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:02.542889    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:02.542954    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:02.553765    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:02.553832    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:02.565782    4765 logs.go:276] 0 containers: []
	W0307 19:44:02.565793    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:02.565852    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:02.576299    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:02.576317    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:02.576321    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:02.587932    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:02.587943    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:02.592103    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:02.592109    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:02.628571    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:02.628581    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:02.642493    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:02.642505    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:02.654267    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:02.654277    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:02.696073    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:02.696083    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:02.713079    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:02.713091    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:02.727883    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:02.727893    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:02.752305    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:02.752317    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:02.791027    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:02.791035    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:02.805599    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:02.805611    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:02.820404    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:02.820416    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:02.831890    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:02.831900    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:02.852471    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:02.852483    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:02.874348    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:02.874365    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:05.397766    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:07.230708    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:07.230883    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:07.249598    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:07.249699    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:07.263867    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:07.263945    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:07.275802    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:07.275871    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:07.286795    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:07.286875    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:07.297170    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:07.297237    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:07.308072    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:07.308129    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:07.318427    4574 logs.go:276] 0 containers: []
	W0307 19:44:07.318440    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:07.318496    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:07.332573    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:07.332588    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:07.332593    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:07.367779    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:07.367791    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:07.372680    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:07.372688    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:07.384454    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:07.384465    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:07.402061    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:07.402071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:07.416556    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:07.416566    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:07.428139    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:07.428148    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:07.464287    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:07.464299    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:07.479129    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:07.479140    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:07.494484    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:07.494495    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:07.507855    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:07.507866    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:07.520101    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:07.520111    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:07.535203    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:07.535218    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:10.400249    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:10.400569    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:10.430288    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:10.430412    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:10.448830    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:10.448924    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:10.466808    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:10.466889    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:10.479215    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:10.479289    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:10.490415    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:10.490489    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:10.501043    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:10.501108    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:10.510761    4765 logs.go:276] 0 containers: []
	W0307 19:44:10.510773    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:10.510830    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:10.521829    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:10.521844    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:10.521849    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:10.533005    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:10.533020    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:10.545024    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:10.545038    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:10.560205    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:10.560217    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:10.577257    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:10.577269    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:10.615809    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:10.615819    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:10.651754    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:10.651767    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:10.692937    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:10.692949    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:10.707108    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:10.707121    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:10.722820    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:10.722834    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:10.734556    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:10.734566    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:10.753270    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:10.753280    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:10.767201    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:10.767213    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:10.771797    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:10.771804    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:10.785794    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:10.785806    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:10.797980    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:10.797991    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:10.060106    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:13.324165    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:15.062157    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:15.062361    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:15.082197    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:15.082295    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:15.096801    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:15.096875    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:15.108573    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:15.108642    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:15.119396    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:15.119467    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:15.130606    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:15.130679    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:15.141170    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:15.141242    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:15.150967    4574 logs.go:276] 0 containers: []
	W0307 19:44:15.150982    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:15.151036    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:15.161551    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:15.161566    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:15.161571    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:15.196404    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:15.196414    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:15.203345    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:15.203357    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:15.240876    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:15.240887    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:15.252226    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:15.252236    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:15.270164    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:15.270175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:15.287350    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:15.287363    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:15.300257    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:15.300267    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:15.316197    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:15.316212    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:15.332706    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:15.332716    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:15.344397    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:15.344406    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:15.358521    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:15.358535    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:15.376533    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:15.376548    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:18.326343    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:18.326527    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:18.354399    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:18.354520    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:18.371244    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:18.371335    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:18.384028    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:18.384099    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:18.395436    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:18.395504    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:18.406189    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:18.406253    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:18.416881    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:18.416950    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:18.426750    4765 logs.go:276] 0 containers: []
	W0307 19:44:18.426761    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:18.426816    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:18.437121    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:18.437140    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:18.437146    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:18.477709    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:18.477721    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:18.489881    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:18.489892    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:18.508393    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:18.508403    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:18.523195    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:18.523208    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:18.547671    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:18.547681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:18.561450    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:18.561462    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:18.576053    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:18.576065    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:18.590319    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:18.590329    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:18.606465    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:18.606479    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:18.610983    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:18.610988    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:18.625408    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:18.625418    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:18.643328    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:18.643339    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:18.680806    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:18.680815    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:18.722097    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:18.722109    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:18.734321    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:18.734332    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:17.902180    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:21.250457    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:22.904338    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:22.904624    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:22.931731    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:22.931854    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:22.949388    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:22.949482    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:22.962798    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:22.962877    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:22.974514    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:22.974581    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:22.986695    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:22.986762    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:22.997472    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:22.997538    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:23.007391    4574 logs.go:276] 0 containers: []
	W0307 19:44:23.007403    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:23.007458    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:23.017982    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:23.017997    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:23.018002    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:23.041204    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:23.041210    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:23.075658    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:23.075665    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:23.089762    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:23.089773    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:23.100745    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:23.100756    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:23.115194    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:23.115205    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:23.126647    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:23.126657    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:23.138031    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:23.138041    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:23.142827    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:23.142835    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:23.186575    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:23.186587    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:23.204385    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:23.204399    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:23.216237    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:23.216247    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:23.233914    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:23.233925    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:25.751920    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:26.252657    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:26.252826    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:26.279386    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:26.279467    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:26.291263    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:26.291346    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:26.301544    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:26.301615    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:26.312492    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:26.312571    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:26.323102    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:26.323168    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:26.336059    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:26.336128    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:26.354107    4765 logs.go:276] 0 containers: []
	W0307 19:44:26.354122    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:26.354183    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:26.364944    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:26.364966    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:26.364972    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:26.369202    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:26.369209    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:26.380960    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:26.380969    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:26.398591    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:26.398606    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:26.410228    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:26.410239    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:26.423248    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:26.423258    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:26.463613    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:26.463623    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:26.499020    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:26.499033    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:26.517413    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:26.517426    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:26.528739    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:26.528750    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:26.540883    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:26.540895    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:26.554834    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:26.554846    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:26.573267    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:26.573278    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:26.588386    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:26.588396    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:26.610698    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:26.610705    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:26.651560    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:26.651571    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:29.168089    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:30.754450    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:30.754748    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:30.780997    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:30.781106    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:30.803506    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:30.803604    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:30.818037    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:30.818113    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:30.828587    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:30.828655    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:30.839033    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:30.839103    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:30.850117    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:30.850183    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:30.860676    4574 logs.go:276] 0 containers: []
	W0307 19:44:30.860689    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:30.860749    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:30.870987    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:30.871004    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:30.871009    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:30.882829    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:30.882840    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:30.895069    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:30.895080    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:30.910376    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:30.910394    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:30.928086    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:30.928101    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:30.939494    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:30.939504    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:30.954219    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:30.954234    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:30.959279    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:30.959287    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:31.004214    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:31.004230    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:31.018647    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:31.018661    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:31.031728    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:31.031742    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:31.050008    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:31.050018    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:31.074572    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:31.074579    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:34.168698    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:34.168905    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:34.194993    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:34.195110    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:34.212057    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:34.212144    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:34.225723    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:34.225797    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:34.237124    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:34.237187    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:34.247454    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:34.247534    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:34.257776    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:34.257845    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:34.269129    4765 logs.go:276] 0 containers: []
	W0307 19:44:34.269142    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:34.269197    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:34.284033    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:34.284052    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:34.284058    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:34.299962    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:34.299975    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:34.319125    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:34.319134    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:34.333956    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:34.333969    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:34.351735    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:34.351745    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:34.365802    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:34.365812    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:34.380586    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:34.380597    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:34.395804    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:34.395814    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:34.408148    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:34.408160    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:34.443620    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:34.443631    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:34.466372    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:34.466384    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:34.478063    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:34.478076    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:34.490141    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:34.490151    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:34.527901    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:34.527910    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:34.531807    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:34.531814    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:34.569089    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:34.569101    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:33.609258    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:37.082631    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:38.611412    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:38.611546    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:38.628947    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:38.629032    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:38.642107    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:38.642180    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:38.658249    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:38.658315    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:38.668979    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:38.669043    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:38.678735    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:38.678800    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:38.688847    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:38.688917    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:38.701755    4574 logs.go:276] 0 containers: []
	W0307 19:44:38.701766    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:38.701821    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:38.712967    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:38.712981    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:38.712987    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:38.724737    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:38.724748    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:38.758777    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:38.758788    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:38.763566    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:38.763580    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:38.778049    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:38.778059    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:38.792267    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:38.792280    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:38.806048    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:38.806060    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:38.817457    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:38.817470    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:38.852149    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:38.852162    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:38.863488    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:38.863498    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:38.877969    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:38.877978    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:38.904208    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:38.904222    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:38.923642    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:38.923653    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:41.449662    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:42.083584    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:42.083683    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:42.098328    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:42.098408    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:42.109239    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:42.109315    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:42.120130    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:42.120203    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:42.130675    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:42.130748    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:42.141114    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:42.141186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:42.151869    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:42.151936    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:42.161730    4765 logs.go:276] 0 containers: []
	W0307 19:44:42.161742    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:42.161796    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:42.172375    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:42.172398    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:42.172404    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:42.176769    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:42.176775    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:42.190545    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:42.190558    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:42.204994    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:42.205009    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:42.216842    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:42.216854    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:42.230216    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:42.230227    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:42.265660    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:42.265670    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:42.279122    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:42.279131    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:42.294841    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:42.294853    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:42.306276    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:42.306288    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:42.323884    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:42.323894    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:42.339147    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:42.339158    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:42.378610    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:42.378623    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:42.417742    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:42.417755    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:42.434606    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:42.434616    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:42.446572    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:42.446584    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:44.971350    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:46.451727    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:46.451883    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:46.467290    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:46.467368    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:46.478323    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:46.478396    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:46.489410    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:46.489481    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:46.500049    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:46.500117    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:46.510646    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:46.510715    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:46.521074    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:46.521142    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:46.531615    4574 logs.go:276] 0 containers: []
	W0307 19:44:46.531625    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:46.531678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:46.546489    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:44:46.546504    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:46.546510    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:46.582713    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:46.582728    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:46.587640    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:46.587648    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:46.600830    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:46.600845    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:46.615402    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:46.615411    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:46.633376    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:46.633389    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:46.657685    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:46.657697    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:46.669440    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:46.669452    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:46.709656    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:46.709668    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:46.724835    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:46.724846    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:46.738494    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:46.738506    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:46.750146    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:46.750156    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:46.762509    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:46.762524    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:49.973531    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:49.973738    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:50.003500    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:50.003618    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:50.018834    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:50.018920    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:50.031125    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:50.031199    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:50.042149    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:50.042214    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:50.056075    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:50.056148    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:50.066707    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:50.066777    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:50.076622    4765 logs.go:276] 0 containers: []
	W0307 19:44:50.076633    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:50.076690    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:50.087385    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:50.087400    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:50.087406    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:50.124441    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:50.124450    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:50.135930    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:50.135941    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:50.149002    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:50.149015    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:50.183935    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:50.183949    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:50.204980    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:50.204992    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:50.220392    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:50.220403    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:50.235891    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:50.235904    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:50.250302    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:50.250315    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:50.254472    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:50.254477    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:50.275813    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:50.275826    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:50.286825    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:50.286837    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:50.301474    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:50.301487    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:50.312893    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:50.312904    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:50.350207    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:50.350217    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:50.367675    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:50.367690    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:49.276188    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:52.893101    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:54.277085    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
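The "stopped:" line above closes one health-probe attempt: the tool polls the apiserver's /healthz endpoint at https://10.0.2.15:8443, and when the request times out it falls through to the diagnostic-collection pass that follows. Below is a minimal Go sketch of that poll-until-healthy pattern; the endpoint and the "context deadline exceeded" failure mode come from the log, while the timeout values, retry count, and TLS handling are illustrative assumptions, not minikube's actual implementation.

```go
// Minimal sketch of the poll-until-healthy loop these log lines show.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between "Checking" and "stopped" above
		Transport: &http.Transport{
			// The apiserver's cert is signed by the cluster CA; a real client
			// would load that CA from the kubeconfig instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Mirrors the "stopped: ... context deadline exceeded" lines.
			fmt.Printf("stopped: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(3 * time.Second) // back off, then collect diagnostics and retry
	}
	fmt.Println("giving up: apiserver never reported healthy")
}
```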
	I0307 19:44:54.277256    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:54.295870    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:44:54.295958    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:54.309970    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:44:54.310037    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:54.321866    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:44:54.321939    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:54.333115    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:44:54.333187    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:54.343697    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:44:54.343769    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:54.354159    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:44:54.354237    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:54.364250    4574 logs.go:276] 0 containers: []
	W0307 19:44:54.364260    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:54.364318    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:54.374346    4574 logs.go:276] 1 containers: [7458e2507075]
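Enumeration blocks like the one ending above run docker ps once per kubeadm-named component, and the "Gathering logs for …" lines that follow tail each container that was found. The sketch below shows that two-step collection; the docker commands mirror the log lines verbatim, while the Go wrapper around them is an illustration under those assumptions, not minikube's code.

```go
// Hedged sketch of the collection loop repeated in this log: list the
// container IDs for each kubeadm-named component, then tail each one's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gather(component string) {
	// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("listing", component, "failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:276 lines
	for _, id := range ids {
		// docker logs --tail 400 <id>
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("tailing", id, "failed:", err)
			continue
		}
		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
	}
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		gather(c)
	}
}
```

The "container status" step has its own fallback, visible in the command itself: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" tries crictl first and falls back to the Docker CLI when crictl is not installed.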
	I0307 19:44:54.374365    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:44:54.374370    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:44:54.392701    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:44:54.392712    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:44:54.404186    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:54.404197    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:54.408613    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:44:54.408621    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:44:54.425290    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:44:54.425301    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:44:54.443577    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:44:54.443590    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:44:54.455995    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:44:54.456008    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:44:54.470526    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:44:54.470537    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:44:54.485466    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:54.485476    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:54.510030    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:54.510044    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:54.545486    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:54.545498    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:54.582076    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:44:54.582091    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:44:54.605326    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:44:54.605339    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:57.119307    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:57.895427    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:57.895592    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:57.907139    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:57.907217    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:57.918867    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:57.918941    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:57.929281    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:57.929350    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:57.939880    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:57.939948    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:57.951509    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:57.951582    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:57.962089    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:57.962160    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:57.972393    4765 logs.go:276] 0 containers: []
	W0307 19:44:57.972406    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:57.972460    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:57.983111    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:57.983128    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:57.983134    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:57.994720    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:57.994732    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:58.009000    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:58.009010    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:58.026834    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:58.026845    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:58.038343    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:58.038354    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:58.058293    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:58.058303    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:58.080490    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:58.080499    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:58.117909    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:58.117917    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:58.121780    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:58.121787    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:58.137852    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:58.137863    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:58.151488    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:58.151499    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:58.166237    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:58.166250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:58.182353    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:58.182364    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:58.194055    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:58.194066    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:58.231607    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:58.231618    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:58.268072    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:58.268083    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:00.781513    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:02.121466    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:02.121681    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:02.136089    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:02.136168    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:02.147577    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:02.147645    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:02.162713    4574 logs.go:276] 2 containers: [c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:02.162780    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:02.173133    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:02.173207    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:02.183126    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:02.183194    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:02.193898    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:02.193962    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:02.204049    4574 logs.go:276] 0 containers: []
	W0307 19:45:02.204061    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:02.204117    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:02.214321    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:02.214335    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:02.214340    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:02.225514    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:02.225524    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:02.237464    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:02.237476    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:02.271364    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:02.271375    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:02.275533    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:02.275542    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:02.293982    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:02.293993    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:02.307639    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:02.307651    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:02.318935    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:02.318946    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:02.342846    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:02.342855    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:02.354080    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:02.354094    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:02.390453    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:02.390467    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:02.410120    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:02.410133    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:02.425378    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:02.425388    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:05.783847    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:05.783992    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:05.799664    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:05.799771    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:05.812380    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:05.812456    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:05.823308    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:05.823373    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:05.835162    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:05.835237    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:05.846204    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:05.846267    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:05.856263    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:05.856335    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:05.866817    4765 logs.go:276] 0 containers: []
	W0307 19:45:05.866834    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:05.866888    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:05.877132    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:05.877148    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:05.877155    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:05.890837    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:05.890851    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:05.905481    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:05.905496    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:05.919742    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:05.919753    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:05.931360    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:05.931373    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:05.970880    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:05.970888    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:05.985048    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:05.985059    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:05.998843    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:05.998856    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:06.013521    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:06.013533    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:06.036594    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:06.036606    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:06.053889    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:06.053900    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:06.058410    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:06.058418    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:04.944788    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:06.093865    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:06.093876    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:06.131673    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:06.131691    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:06.144915    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:06.144927    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:06.156908    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:06.156920    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:08.670510    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:09.946931    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:09.947179    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:09.969703    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:09.969810    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:09.985591    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:09.985678    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:09.999151    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:09.999230    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:10.010609    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:10.010691    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:10.025617    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:10.025683    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:10.036673    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:10.036735    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:10.046721    4574 logs.go:276] 0 containers: []
	W0307 19:45:10.046735    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:10.046796    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:10.057587    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:10.057607    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:10.057613    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:10.074811    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:10.074825    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:10.079396    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:10.079404    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:10.112802    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:10.112817    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:10.127189    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:10.127198    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:10.138773    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:10.138783    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:10.150356    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:10.150367    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:10.165273    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:10.165284    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:10.177255    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:10.177268    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:10.211485    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:10.211493    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:10.234736    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:10.234746    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:10.249341    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:10.249354    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:10.263397    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:10.263410    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:10.275481    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:10.275492    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:10.291094    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:10.291106    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:12.807450    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:13.672792    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:13.673186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:13.720616    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:13.720755    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:13.743971    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:13.744070    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:13.758482    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:13.758570    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:13.770057    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:13.770123    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:13.781503    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:13.781579    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:13.791749    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:13.791820    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:13.801699    4765 logs.go:276] 0 containers: []
	W0307 19:45:13.801709    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:13.801762    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:13.812242    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:13.812262    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:13.812267    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:13.827997    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:13.828014    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:13.852474    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:13.852483    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:13.890394    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:13.890403    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:13.924834    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:13.924848    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:13.938963    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:13.938975    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:13.955040    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:13.955051    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:13.970902    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:13.970918    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:13.985693    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:13.985705    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:13.999621    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:13.999632    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:14.017350    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:14.017360    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:14.029623    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:14.029634    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:14.041866    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:14.041877    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:14.046419    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:14.046426    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:14.084620    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:14.084632    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:14.098333    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:14.098344    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:17.809582    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:17.809792    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:17.838187    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:17.838298    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:17.856040    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:17.856136    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:16.613205    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:17.870065    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:17.871587    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:17.883299    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:17.883374    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:17.893995    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:17.894062    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:17.904399    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:17.904468    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:17.914842    4574 logs.go:276] 0 containers: []
	W0307 19:45:17.914853    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:17.914909    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:17.925316    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:17.925333    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:17.925339    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:17.939671    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:17.939685    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:17.951480    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:17.951492    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:17.955971    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:17.955977    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:17.998367    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:17.998380    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:18.011531    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:18.011545    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:18.026376    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:18.026385    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:18.038076    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:18.038085    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:18.060124    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:18.060135    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:18.072204    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:18.072217    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:18.083648    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:18.083661    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:18.118998    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:18.119007    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:18.133156    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:18.133167    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:18.144467    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:18.144478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:18.160953    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:18.160964    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:20.689168    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:21.615306    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:21.615445    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:21.629606    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:21.629690    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:21.641482    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:21.641547    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:21.655343    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:21.655413    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:21.665968    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:21.666038    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:21.677011    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:21.677075    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:21.687750    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:21.687830    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:21.699354    4765 logs.go:276] 0 containers: []
	W0307 19:45:21.699365    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:21.699423    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:21.709995    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:21.710022    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:21.710027    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:21.724354    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:21.724368    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:21.738802    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:21.738813    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:21.762308    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:21.762320    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:21.777626    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:21.777637    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:21.816457    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:21.816473    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:21.828262    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:21.828276    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:21.839550    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:21.839562    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:21.854621    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:21.854635    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:21.858900    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:21.858905    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:21.894750    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:21.894762    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:21.909633    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:21.909647    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:21.926920    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:21.926930    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:21.938616    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:21.938626    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:21.950211    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:21.950223    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:21.988628    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:21.988637    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:24.502820    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:25.690195    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:25.690471    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:25.721688    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:25.721806    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:25.738574    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:25.738655    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:25.751676    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:25.751751    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:25.763474    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:25.763542    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:25.781017    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:25.781084    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:25.791503    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:25.791569    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:25.801202    4574 logs.go:276] 0 containers: []
	W0307 19:45:25.801213    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:25.801273    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:25.811462    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:25.811478    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:25.811483    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:25.847371    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:25.847385    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:25.861927    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:25.861938    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:25.873605    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:25.873614    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:25.887927    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:25.887938    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:25.899236    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:25.899247    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:25.911025    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:25.911037    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:25.922809    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:25.922823    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:25.934945    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:25.934956    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:25.957381    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:25.957390    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:25.982324    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:25.982332    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:26.017826    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:26.017834    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:26.022865    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:26.022874    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:26.034431    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:26.034440    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:26.049064    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:26.049075    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:29.504967    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:29.505109    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:29.518612    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:29.518687    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:29.529297    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:29.529384    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:29.540259    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:29.540329    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:29.551428    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:29.551502    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:29.561935    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:29.562008    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:29.576773    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:29.576840    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:29.587555    4765 logs.go:276] 0 containers: []
	W0307 19:45:29.587567    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:29.587627    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:29.597944    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:29.597961    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:29.597967    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:29.613930    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:29.613943    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:29.628595    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:29.628620    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:29.646016    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:29.646027    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:29.661089    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:29.661103    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:29.672812    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:29.672823    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:29.687133    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:29.687146    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:29.701162    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:29.701173    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:29.738625    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:29.738638    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:29.751240    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:29.751250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:29.762950    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:29.762962    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:29.767047    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:29.767054    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:29.806423    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:29.806438    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:29.820216    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:29.820225    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:29.833997    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:29.834013    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:29.856341    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:29.856348    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:28.561651    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:32.397502    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:33.563783    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:33.563912    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:33.576442    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:33.576519    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:33.587344    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:33.587416    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:33.600493    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:33.600583    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:33.610989    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:33.611052    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:33.621575    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:33.621641    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:33.636042    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:33.636114    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:33.646265    4574 logs.go:276] 0 containers: []
	W0307 19:45:33.646278    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:33.646328    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:33.657341    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:33.657357    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:33.657363    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:33.692747    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:33.692760    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:33.710999    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:33.711011    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:33.726723    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:33.726737    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:33.731226    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:33.731232    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:33.746598    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:33.746609    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:33.759061    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:33.759071    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:33.776863    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:33.776877    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:33.792361    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:33.792370    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:33.815790    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:33.815799    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:33.826980    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:33.826992    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:33.843562    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:33.843574    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:33.857487    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:33.857500    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:33.868862    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:33.868875    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:33.904844    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:33.904863    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:36.418904    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:37.399992    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:37.400248    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:37.426926    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:37.427047    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:37.443767    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:37.443857    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:37.457518    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:37.457591    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:37.468824    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:37.468901    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:37.483647    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:37.483718    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:37.494357    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:37.494427    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:37.504877    4765 logs.go:276] 0 containers: []
	W0307 19:45:37.504892    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:37.504948    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:37.515598    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:37.515616    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:37.515621    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:37.554692    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:37.554704    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:37.558995    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:37.559002    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:37.599649    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:37.599661    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:37.615151    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:37.615162    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:37.627096    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:37.627106    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:37.640775    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:37.640786    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:37.655896    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:37.655906    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:37.679634    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:37.679643    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:37.693660    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:37.693671    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:37.707680    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:37.707691    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:37.718928    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:37.718942    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:37.735930    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:37.735941    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:37.773608    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:37.773619    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:37.785633    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:37.785644    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:37.797707    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:37.797718    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:40.310773    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:41.421284    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:41.421394    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:41.431988    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:41.432063    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:41.442843    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:41.442912    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:41.454001    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:41.454072    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:41.465479    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:41.465550    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:41.479694    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:41.479766    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:41.490399    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:41.490467    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:41.501122    4574 logs.go:276] 0 containers: []
	W0307 19:45:41.501135    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:41.501195    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:41.511608    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:41.511624    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:41.511629    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:41.523414    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:41.523428    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:41.536494    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:41.536507    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:41.572016    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:41.572026    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:41.576649    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:41.576655    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:41.588072    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:41.588085    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:41.608437    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:41.608450    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:41.620001    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:41.620010    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:41.643894    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:41.643904    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:41.658531    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:41.658542    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:41.692199    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:41.692210    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:41.708715    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:41.708728    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:41.720261    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:41.720273    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:41.733202    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:41.733213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:41.751504    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:41.751519    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:45.313250    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:45.313376    4765 kubeadm.go:591] duration metric: took 4m3.784817209s to restartPrimaryControlPlane
	W0307 19:45:45.313452    4765 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 19:45:45.313487    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 19:45:46.350289    4765 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036805708s)
	I0307 19:45:46.350361    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:45:46.355530    4765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:45:46.358501    4765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:45:46.361134    4765 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:45:46.361140    4765 kubeadm.go:156] found existing configuration files:
	
	I0307 19:45:46.361163    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf
	I0307 19:45:46.363458    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:45:46.363478    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:45:46.366342    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf
	I0307 19:45:46.368837    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:45:46.368864    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:45:46.371415    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf
	I0307 19:45:46.374596    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:45:46.374616    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:45:46.377559    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf
	I0307 19:45:46.380110    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:45:46.380133    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
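
The cleanup above applies one rule per kubeconfig file: keep it only if it already points at the expected control-plane URL, otherwise remove it (here none of the four files exist, so every grep exits with status 2 and each rm is a no-op). The four steps collapse to:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    if ! sudo grep -q "https://control-plane.minikube.internal:50510" "/etc/kubernetes/${f}"; then
        sudo rm -f "/etc/kubernetes/${f}"
    fi
done
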
	I0307 19:45:46.383185    4765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 19:45:46.400841    4765 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 19:45:46.400871    4765 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 19:45:46.449133    4765 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:45:46.449187    4765 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:45:46.449229    4765 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 19:45:46.510220    4765 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:45:46.514450    4765 out.go:204]   - Generating certificates and keys ...
	I0307 19:45:46.514483    4765 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 19:45:46.514509    4765 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 19:45:46.514553    4765 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:45:46.514585    4765 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:45:46.514622    4765 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:45:46.514660    4765 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 19:45:46.514699    4765 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:45:46.514750    4765 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:45:46.514801    4765 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:45:46.514837    4765 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:45:46.514856    4765 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 19:45:46.514886    4765 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:45:46.670966    4765 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:45:46.946212    4765 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:45:47.085537    4765 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:45:47.142160    4765 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:45:47.173018    4765 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:45:47.173475    4765 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:45:47.173498    4765 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 19:45:47.252696    4765 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:45:44.270871    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:47.256604    4765 out.go:204]   - Booting up control plane ...
	I0307 19:45:47.256654    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:45:47.256695    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:45:47.256734    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:45:47.256795    4765 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:45:47.261586    4765 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:45:51.763947    4765 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501498 seconds
	I0307 19:45:51.764056    4765 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 19:45:51.768069    4765 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 19:45:52.279053    4765 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 19:45:52.279236    4765 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 19:45:52.782658    4765 kubeadm.go:309] [bootstrap-token] Using token: es2efn.kgkj8j6c0xom9oxf
	I0307 19:45:49.273268    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:49.273379    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:49.288238    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:49.288306    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:49.299572    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:49.299646    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:49.311677    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:49.311754    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:49.322530    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:49.322605    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:49.333567    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:49.333642    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:49.344961    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:49.345031    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:49.356239    4574 logs.go:276] 0 containers: []
	W0307 19:45:49.356253    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:49.356314    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:49.369183    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:49.369203    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:49.369209    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:49.405963    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:49.405979    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:49.443897    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:49.443908    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:49.462212    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:49.462226    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:49.476394    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:49.476407    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:49.488798    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:49.488811    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:49.504533    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:49.504549    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:49.518400    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:49.518412    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:49.530564    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:49.530575    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:49.550162    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:49.550175    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:49.569085    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:49.569102    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:49.582542    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:49.582554    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:49.595545    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:49.595557    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:49.600352    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:49.600361    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:49.612989    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:49.612999    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:52.140253    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:52.789321    4765 out.go:204]   - Configuring RBAC rules ...
	I0307 19:45:52.789389    4765 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 19:45:52.796167    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 19:45:52.798168    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 19:45:52.798902    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 19:45:52.799732    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 19:45:52.800557    4765 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 19:45:52.803710    4765 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 19:45:52.985816    4765 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 19:45:53.199015    4765 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 19:45:53.199667    4765 kubeadm.go:309] 
	I0307 19:45:53.199698    4765 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 19:45:53.199700    4765 kubeadm.go:309] 
	I0307 19:45:53.199736    4765 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 19:45:53.199740    4765 kubeadm.go:309] 
	I0307 19:45:53.199763    4765 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 19:45:53.199813    4765 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 19:45:53.199876    4765 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 19:45:53.199911    4765 kubeadm.go:309] 
	I0307 19:45:53.199961    4765 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 19:45:53.199969    4765 kubeadm.go:309] 
	I0307 19:45:53.199992    4765 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 19:45:53.199995    4765 kubeadm.go:309] 
	I0307 19:45:53.200026    4765 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 19:45:53.200073    4765 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 19:45:53.200173    4765 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 19:45:53.200178    4765 kubeadm.go:309] 
	I0307 19:45:53.200253    4765 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 19:45:53.200319    4765 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 19:45:53.200324    4765 kubeadm.go:309] 
	I0307 19:45:53.200425    4765 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token es2efn.kgkj8j6c0xom9oxf \
	I0307 19:45:53.200478    4765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 \
	I0307 19:45:53.200493    4765 kubeadm.go:309] 	--control-plane 
	I0307 19:45:53.200495    4765 kubeadm.go:309] 
	I0307 19:45:53.200536    4765 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 19:45:53.200541    4765 kubeadm.go:309] 
	I0307 19:45:53.200580    4765 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token es2efn.kgkj8j6c0xom9oxf \
	I0307 19:45:53.200685    4765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 
	I0307 19:45:53.200743    4765 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
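
The only warning kubeadm leaves behind is the disabled kubelet unit; making it persistent across reboots is exactly the one-liner the message suggests:

sudo systemctl enable kubelet.service

(minikube itself starts the unit explicitly a few lines below rather than relying on enablement.)
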
	I0307 19:45:53.200748    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:45:53.200756    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:45:53.206539    4765 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 19:45:53.216535    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 19:45:53.220013    4765 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
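
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. A bridge conflist of roughly that shape looks like the sketch below; the content is illustrative only and assumes the conventional 10.244.0.0/16 pod subnet, so the bytes minikube actually ships may differ:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
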
	I0307 19:45:53.225043    4765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:45:53.225096    4765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:45:53.225121    4765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-126000 minikube.k8s.io/updated_at=2024_03_07T19_45_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=stopped-upgrade-126000 minikube.k8s.io/primary=true
	I0307 19:45:53.268327    4765 kubeadm.go:1106] duration metric: took 43.278042ms to wait for elevateKubeSystemPrivileges
	I0307 19:45:53.268340    4765 ops.go:34] apiserver oom_adj: -16
	W0307 19:45:53.268419    4765 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 19:45:53.268428    4765 kubeadm.go:393] duration metric: took 4m11.753652s to StartCluster
	I0307 19:45:53.268438    4765 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:45:53.268507    4765 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:45:53.268910    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:45:53.269271    4765 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:45:53.273509    4765 out.go:177] * Verifying Kubernetes components...
	I0307 19:45:53.269278    4765 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:45:53.269343    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:45:53.280490    4765 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-126000"
	I0307 19:45:53.280506    4765 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-126000"
	W0307 19:45:53.280513    4765 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:45:53.280524    4765 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0307 19:45:53.280532    4765 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-126000"
	I0307 19:45:53.280544    4765 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-126000"
	I0307 19:45:53.280508    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:45:53.281738    4765 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1037a76a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:45:53.281854    4765 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-126000"
	W0307 19:45:53.281858    4765 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:45:53.281866    4765 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0307 19:45:53.286487    4765 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:45:53.290455    4765 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:45:53.290461    4765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:45:53.290467    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:45:53.291134    4765 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:45:53.291138    4765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:45:53.291142    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:45:53.374479    4765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:45:53.379960    4765 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:45:53.380000    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:45:53.384163    4765 api_server.go:72] duration metric: took 114.88475ms to wait for apiserver process to appear ...
	I0307 19:45:53.384171    4765 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:45:53.384178    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:53.414677    4765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:45:53.421553    4765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
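
Both addon manifests are applied with the node's bundled kubectl against the local kubeconfig. Whether they landed can be checked the same way; note that with the apiserver in the state recorded above, these calls will block until their own timeouts fire:

sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pods
sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
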
	I0307 19:45:57.142420    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:57.142584    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:57.165946    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:45:57.166024    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:57.177326    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:45:57.177400    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:57.188522    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:45:57.188600    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:57.201350    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:45:57.201424    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:57.211647    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:45:57.211716    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:57.222586    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:45:57.222653    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:57.233315    4574 logs.go:276] 0 containers: []
	W0307 19:45:57.233325    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:57.233377    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:57.243761    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:45:57.243779    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:45:57.243784    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:45:57.258052    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:45:57.258065    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:45:57.275850    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:45:57.275865    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:57.288962    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:45:57.288973    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:45:57.303892    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:57.303903    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:57.338498    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:57.338510    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:57.342727    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:45:57.342733    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:45:57.357831    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:45:57.357845    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:45:57.369740    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:45:57.369752    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:45:57.381769    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:57.381780    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:57.405361    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:45:57.405369    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:45:57.419406    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:45:57.419417    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:45:57.431780    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:45:57.431790    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:45:57.443243    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:57.443252    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:57.478599    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:45:57.478610    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:45:58.386042    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:58.386069    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:59.992551    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:03.386041    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:03.386062    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:04.994654    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:04.994808    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:05.006350    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:05.006416    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:05.016484    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:05.016556    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:05.027656    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:05.027725    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:05.038243    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:05.038311    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:05.049003    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:05.049066    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:05.059520    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:05.059589    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:05.069840    4574 logs.go:276] 0 containers: []
	W0307 19:46:05.069851    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:05.069910    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:05.085238    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:05.085256    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:05.085261    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:05.120912    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:05.120926    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:05.125768    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:05.125775    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:05.137836    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:05.137848    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:05.150186    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:05.150196    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:05.161720    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:05.161731    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:05.195877    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:05.195888    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:05.211081    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:05.211091    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:05.237015    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:05.237025    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:05.249028    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:05.249037    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:05.264904    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:05.264921    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:05.277176    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:05.277187    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:05.292124    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:05.292136    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:05.305687    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:05.305701    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:05.320399    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:05.320408    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:07.846928    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:08.386119    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:08.386143    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:12.849037    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:12.849294    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:13.386223    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:13.386247    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:12.878507    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:12.878630    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:12.896944    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:12.897031    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:12.911022    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:12.911093    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:12.922705    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:12.922777    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:12.933351    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:12.933419    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:12.943945    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:12.944015    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:12.954328    4574 logs.go:276] 0 containers: []
	W0307 19:46:12.954339    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:12.954397    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:12.965569    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:12.965585    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:12.965591    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:12.978172    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:12.978182    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:12.982468    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:12.982478    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:12.996310    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:12.996322    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:13.008418    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:13.008429    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:13.023553    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:13.023564    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:13.035050    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:13.035060    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:13.060092    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:13.060104    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:13.095291    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:13.095302    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:13.109938    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:13.109949    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:13.122115    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:13.122124    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:13.157515    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:13.157544    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:13.169421    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:13.169434    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:13.186946    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:13.186960    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:13.200674    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:13.200691    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:15.714208    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:18.386435    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:18.386478    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:20.716670    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:20.716889    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:20.733268    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:20.733353    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:20.750609    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:20.750679    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:20.761613    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:20.761684    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:20.772630    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:20.772692    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:20.782869    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:20.782950    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:20.793232    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:20.793303    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:20.803009    4574 logs.go:276] 0 containers: []
	W0307 19:46:20.803019    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:20.803073    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:20.818218    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:20.818239    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:20.818245    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:20.829902    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:20.829916    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:20.842238    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:20.842252    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:20.856311    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:20.856324    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:20.893639    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:20.893653    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:20.908618    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:20.908629    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:20.926939    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:20.926949    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:20.938454    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:20.938468    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:20.961100    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:20.961108    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:20.965261    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:20.965267    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:21.000341    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:21.000351    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:21.012114    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:21.012124    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:21.026768    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:21.026780    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:21.038993    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:21.039003    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:21.050694    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:21.050707    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:23.386900    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:23.386932    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 19:46:23.763325    4765 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 19:46:23.766731    4765 out.go:177] * Enabled addons: storage-provisioner
	I0307 19:46:23.778622    4765 addons.go:505] duration metric: took 30.510592875s for enable addons: enabled=[storage-provisioner]
	I0307 19:46:23.566286    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:28.387297    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:28.387401    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:28.568320    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:28.568574    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:28.596961    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:28.597071    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:28.618680    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:28.618765    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:28.641482    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:28.641564    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:28.656877    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:28.656946    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:28.669879    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:28.669940    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:28.680232    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:28.680322    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:28.691093    4574 logs.go:276] 0 containers: []
	W0307 19:46:28.691107    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:28.691167    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:28.701281    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:28.701300    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:28.701305    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:28.718172    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:28.718184    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:28.753473    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:28.753485    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:28.758106    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:28.758116    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:28.795209    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:28.795220    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:28.809373    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:28.809387    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:28.823072    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:28.823083    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:28.835546    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:28.835559    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:28.853270    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:28.853281    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:28.865690    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:28.865700    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:28.879755    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:28.879768    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:28.907605    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:28.907616    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:28.919125    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:28.919136    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:28.931055    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:28.931064    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:28.945616    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:28.945626    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:31.461184    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:33.388454    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:33.388479    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:36.461371    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:36.461668    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:36.492556    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:36.492681    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:36.511796    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:36.511877    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:36.526031    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:36.526100    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:36.543629    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:36.543697    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:36.557904    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:36.557970    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:36.568454    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:36.568526    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:36.578665    4574 logs.go:276] 0 containers: []
	W0307 19:46:36.578682    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:36.578737    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:36.589635    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:36.589651    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:36.589657    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:36.601595    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:36.601605    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:36.614878    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:36.614890    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:36.620057    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:36.620066    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:36.654911    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:36.654921    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:36.667020    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:36.667030    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:36.679689    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:36.679708    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:36.694597    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:36.694611    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:36.707203    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:36.707213    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:36.718800    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:36.718814    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:36.730175    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:36.730185    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:36.752733    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:36.752742    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:36.786264    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:36.786272    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:36.801688    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:36.801698    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:36.815607    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:36.815620    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:38.388788    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:38.388817    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:39.335236    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:43.389966    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:43.390711    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:44.336263    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:44.336484    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:44.351578    4574 logs.go:276] 1 containers: [32e74c443e04]
	I0307 19:46:44.351662    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:44.364262    4574 logs.go:276] 1 containers: [7d14a85cd9d0]
	I0307 19:46:44.364343    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:44.377354    4574 logs.go:276] 4 containers: [f3158ccf4712 10cf323e84db c36072e0dfe4 3dec00dfb0fe]
	I0307 19:46:44.377453    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:44.389893    4574 logs.go:276] 1 containers: [56771d857c09]
	I0307 19:46:44.389969    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:44.402217    4574 logs.go:276] 1 containers: [d4b07cda1052]
	I0307 19:46:44.402307    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:44.418663    4574 logs.go:276] 1 containers: [d8962f9a1bff]
	I0307 19:46:44.418746    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:44.429291    4574 logs.go:276] 0 containers: []
	W0307 19:46:44.429303    4574 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:44.429372    4574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:44.443159    4574 logs.go:276] 1 containers: [7458e2507075]
	I0307 19:46:44.443179    4574 logs.go:123] Gathering logs for storage-provisioner [7458e2507075] ...
	I0307 19:46:44.443185    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7458e2507075"
	I0307 19:46:44.454814    4574 logs.go:123] Gathering logs for coredns [f3158ccf4712] ...
	I0307 19:46:44.454826    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3158ccf4712"
	I0307 19:46:44.468511    4574 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:44.468523    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:44.473417    4574 logs.go:123] Gathering logs for coredns [10cf323e84db] ...
	I0307 19:46:44.473425    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10cf323e84db"
	I0307 19:46:44.485138    4574 logs.go:123] Gathering logs for kube-proxy [d4b07cda1052] ...
	I0307 19:46:44.485151    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b07cda1052"
	I0307 19:46:44.497094    4574 logs.go:123] Gathering logs for container status ...
	I0307 19:46:44.497106    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:44.509225    4574 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:44.509237    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:44.544152    4574 logs.go:123] Gathering logs for etcd [7d14a85cd9d0] ...
	I0307 19:46:44.544166    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d14a85cd9d0"
	I0307 19:46:44.558472    4574 logs.go:123] Gathering logs for coredns [3dec00dfb0fe] ...
	I0307 19:46:44.558484    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dec00dfb0fe"
	I0307 19:46:44.570108    4574 logs.go:123] Gathering logs for kube-controller-manager [d8962f9a1bff] ...
	I0307 19:46:44.570118    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8962f9a1bff"
	I0307 19:46:44.587776    4574 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:44.587787    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:44.610726    4574 logs.go:123] Gathering logs for kube-apiserver [32e74c443e04] ...
	I0307 19:46:44.610737    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e74c443e04"
	I0307 19:46:44.625127    4574 logs.go:123] Gathering logs for coredns [c36072e0dfe4] ...
	I0307 19:46:44.625138    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36072e0dfe4"
	I0307 19:46:44.637341    4574 logs.go:123] Gathering logs for kube-scheduler [56771d857c09] ...
	I0307 19:46:44.637353    4574 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56771d857c09"
	I0307 19:46:44.652862    4574 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:44.652875    4574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:47.190595    4574 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:48.392424    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:48.392472    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:52.192789    4574 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:52.198064    4574 out.go:177] 
	W0307 19:46:52.202260    4574 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 19:46:52.202270    4574 out.go:239] * 
	W0307 19:46:52.203114    4574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:46:52.219041    4574 out.go:177] 
	I0307 19:46:53.394562    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:53.394798    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:53.416444    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:46:53.416525    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:53.446717    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:46:53.446784    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:53.457868    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:46:53.457929    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:53.469274    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:46:53.469347    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:53.479596    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:46:53.479670    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:53.495569    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:46:53.495636    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:53.505791    4765 logs.go:276] 0 containers: []
	W0307 19:46:53.505808    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:53.505864    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:53.516345    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:46:53.516362    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:53.516368    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:53.550982    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:53.550994    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:53.586398    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:46:53.586410    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:46:53.598199    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:46:53.598214    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:46:53.609709    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:53.609721    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:53.634829    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:46:53.634837    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:53.647518    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:46:53.647533    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:46:53.666901    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:53.666918    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:53.671699    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:46:53.671711    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:46:53.687905    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:46:53.687918    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:46:53.707736    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:46:53.707749    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:46:53.721654    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:46:53.721669    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:46:53.735421    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:46:53.735434    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:46:56.258326    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:01.261020    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:01.261242    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:01.283327    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:01.283432    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:01.298635    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:01.298718    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:01.311294    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:01.311367    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:01.322422    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:01.322490    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:01.332971    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:01.333047    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:01.343255    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:01.343325    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:01.353961    4765 logs.go:276] 0 containers: []
	W0307 19:47:01.353972    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:01.354028    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:01.363948    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:01.363967    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:01.363973    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:01.369200    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:01.369210    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:01.403858    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:01.403874    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:01.419546    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:01.419559    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:01.433548    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:01.433560    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:01.445419    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:01.445429    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:01.457674    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:01.457687    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:01.482077    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:01.482089    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:01.494480    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:01.494492    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:01.530164    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:01.530178    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:01.549445    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:01.549457    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:01.561708    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:01.561720    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:01.576117    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:01.576127    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:04.095693    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
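	
	The repeated `api_server.go` probes above each time out against https://10.0.2.15:8443/healthz until the overall 6m0s node-start deadline expires, which is what produces the GUEST_START failure earlier in this log. A minimal Go sketch of that kind of poll loop, with hypothetical names (this is not minikube's actual api_server.go): a short per-probe client timeout is what surfaces as "Client.Timeout exceeded while awaiting headers".
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz retries GET /healthz with a short per-probe timeout until an
	// overall deadline passes.
	func waitForHealthz(url string, overall, perProbe time.Duration) error {
		client := &http.Client{
			Timeout: perProbe, // per-probe timeout -> "Client.Timeout exceeded while awaiting headers"
			// The apiserver serves a self-signed cert, so this probe sketch skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(5 * time.Second) // the log shows probes roughly every 5 seconds
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}
	
	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	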
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-03-08 03:37:46 UTC, ends at Fri 2024-03-08 03:47:08 UTC. --
	Mar 08 03:46:53 running-upgrade-440000 dockerd[3217]: time="2024-03-08T03:46:53.841376282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 03:46:53 running-upgrade-440000 dockerd[3217]: time="2024-03-08T03:46:53.841428737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 03:46:53 running-upgrade-440000 dockerd[3217]: time="2024-03-08T03:46:53.841440237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 03:46:53 running-upgrade-440000 dockerd[3217]: time="2024-03-08T03:46:53.841509608Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bb6015c5f09904d5ff87530fc0ee3d6997c1bda21d81c4264060448058a46be2 pid=17937 runtime=io.containerd.runc.v2
	Mar 08 03:46:53 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:53Z" level=error msg="ContainerStats resp: {0x400074c4c0 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x40008e2c40 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x40004b0fc0 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x40008e30c0 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x40004b1f80 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x4000321f80 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x4000672400 linux}"
	Mar 08 03:46:54 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:54Z" level=error msg="ContainerStats resp: {0x40006725c0 linux}"
	Mar 08 03:46:58 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:46:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 08 03:47:03 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 08 03:47:04 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:04Z" level=error msg="ContainerStats resp: {0x400074c7c0 linux}"
	Mar 08 03:47:04 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:04Z" level=error msg="ContainerStats resp: {0x400074ce00 linux}"
	Mar 08 03:47:06 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:06Z" level=error msg="ContainerStats resp: {0x4000672600 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x4000672e80 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x4000672040 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x4000672240 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x40008e2a00 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x40008e3000 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x4000673080 linux}"
	Mar 08 03:47:07 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:07Z" level=error msg="ContainerStats resp: {0x40008e3b00 linux}"
	Mar 08 03:47:08 running-upgrade-440000 cri-dockerd[3059]: time="2024-03-08T03:47:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bb6015c5f0990       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   e849f2cccd31e
	f026859a020d8       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   cc2fccb968a5d
	f3158ccf4712d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e849f2cccd31e
	10cf323e84db0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cc2fccb968a5d
	7458e25070754       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   bd28fc719ceab
	d4b07cda10528       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   68010396c63eb
	d8962f9a1bffe       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f7a656cc3db93
	56771d857c095       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   b2ca8995ef9b5
	7d14a85cd9d07       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   b4e4fdc00862d
	32e74c443e04f       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1787291d63b21
	
	
	==> coredns [10cf323e84db] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4988772847399631235.7509789641229857251. HINFO: read udp 10.244.0.3:46631->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4988772847399631235.7509789641229857251. HINFO: read udp 10.244.0.3:42057->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4988772847399631235.7509789641229857251. HINFO: read udp 10.244.0.3:54644->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4988772847399631235.7509789641229857251. HINFO: read udp 10.244.0.3:54957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4988772847399631235.7509789641229857251. HINFO: read udp 10.244.0.3:52745->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb6015c5f099] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5056319694163042593.566649357341551856. HINFO: read udp 10.244.0.2:38702->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5056319694163042593.566649357341551856. HINFO: read udp 10.244.0.2:37879->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f026859a020d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9069287451090644262.1395478335458456019. HINFO: read udp 10.244.0.3:41129->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9069287451090644262.1395478335458456019. HINFO: read udp 10.244.0.3:43878->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f3158ccf4712] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:45748->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:56162->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:55581->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:50859->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:58529->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:47825->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:47540->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:57760->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:35222->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1766864351365415737.5212152900213292378. HINFO: read udp 10.244.0.2:34454->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
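	
	Every coredns instance in this run logs the same failure: forwarded HINFO queries to the VM's upstream resolver at 10.0.2.3:53 (QEMU user-mode networking's DNS) never get a reply. A standalone probe of that upstream path, as a hedged sketch (hypothetical helper, not part of the test suite):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Force Go's built-in resolver to use the same upstream coredns forwards to.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
			// An i/o timeout here mirrors the coredns HINFO errors above.
			fmt.Println("upstream DNS unreachable:", err)
		} else {
			fmt.Println("upstream DNS reachable")
		}
	}
	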
	
	
	==> describe nodes <==
	Name:               running-upgrade-440000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-440000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=running-upgrade-440000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T19_42_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:42:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-440000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:47:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:42:51 +0000   Fri, 08 Mar 2024 03:42:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:42:51 +0000   Fri, 08 Mar 2024 03:42:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:42:51 +0000   Fri, 08 Mar 2024 03:42:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:42:51 +0000   Fri, 08 Mar 2024 03:42:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-440000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 f469c9140c334974b90aa901257e2824
	  System UUID:                f469c9140c334974b90aa901257e2824
	  Boot ID:                    1f5e3b38-acde-4d4d-810a-3e20970cb4ea
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jv959                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-t45sm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-440000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-440000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-440000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-plxb7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-440000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-440000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-440000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-440000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-440000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-440000 event: Registered Node running-upgrade-440000 in Controller
	
	
	==> dmesg <==
	[  +1.613231] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.084492] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.082664] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.137518] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.074179] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.074600] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[Mar 8 03:38] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[ +14.170332] systemd-fstab-generator[1948]: Ignoring "noauto" for root device
	[  +2.710471] systemd-fstab-generator[2218]: Ignoring "noauto" for root device
	[  +0.133392] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.094898] systemd-fstab-generator[2262]: Ignoring "noauto" for root device
	[  +0.093925] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[ +12.551038] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.215757] systemd-fstab-generator[3015]: Ignoring "noauto" for root device
	[  +0.079016] systemd-fstab-generator[3027]: Ignoring "noauto" for root device
	[  +0.080766] systemd-fstab-generator[3038]: Ignoring "noauto" for root device
	[  +0.087649] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +2.118594] systemd-fstab-generator[3204]: Ignoring "noauto" for root device
	[  +5.842001] systemd-fstab-generator[3596]: Ignoring "noauto" for root device
	[  +1.325227] systemd-fstab-generator[3758]: Ignoring "noauto" for root device
	[Mar 8 03:39] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 8 03:42] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.300727] systemd-fstab-generator[11189]: Ignoring "noauto" for root device
	[  +5.643991] systemd-fstab-generator[11780]: Ignoring "noauto" for root device
	[  +0.470622] systemd-fstab-generator[11914]: Ignoring "noauto" for root device
	
	
	==> etcd [7d14a85cd9d0] <==
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-08T03:42:46.657Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T03:42:47.301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-08T03:42:47.302Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-440000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:42:47.306Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-08T03:42:47.307Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:42:47.307Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:42:47.314Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:42:47.314Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:47:08 up 9 min,  0 users,  load average: 0.05, 0.15, 0.09
	Linux running-upgrade-440000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [32e74c443e04] <==
	I0308 03:42:48.551113       1 cache.go:39] Caches are synced for autoregister controller
	I0308 03:42:48.551193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:42:48.551861       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0308 03:42:48.552506       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0308 03:42:48.552519       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 03:42:48.552645       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0308 03:42:48.574852       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0308 03:42:49.280777       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0308 03:42:49.459606       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0308 03:42:49.465040       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0308 03:42:49.465079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 03:42:49.619716       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 03:42:49.629920       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 03:42:49.721182       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0308 03:42:49.722783       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0308 03:42:49.723194       1 controller.go:611] quota admission added evaluator for: endpoints
	I0308 03:42:49.724317       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 03:42:50.611653       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0308 03:42:51.236281       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0308 03:42:51.240104       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0308 03:42:51.245424       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0308 03:42:51.289715       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:43:05.318115       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0308 03:43:05.418135       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0308 03:43:05.847647       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [d8962f9a1bff] <==
	I0308 03:43:04.667525       1 shared_informer.go:262] Caches are synced for attach detach
	I0308 03:43:04.667556       1 shared_informer.go:262] Caches are synced for taint
	I0308 03:43:04.667593       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0308 03:43:04.667637       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-440000. Assuming now as a timestamp.
	I0308 03:43:04.667680       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0308 03:43:04.667689       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0308 03:43:04.667769       1 shared_informer.go:262] Caches are synced for disruption
	I0308 03:43:04.667775       1 disruption.go:371] Sending events to api server.
	I0308 03:43:04.667922       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0308 03:43:04.667984       1 event.go:294] "Event occurred" object="running-upgrade-440000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-440000 event: Registered Node running-upgrade-440000 in Controller"
	I0308 03:43:04.668491       1 shared_informer.go:262] Caches are synced for deployment
	I0308 03:43:04.718347       1 shared_informer.go:262] Caches are synced for job
	I0308 03:43:04.767754       1 shared_informer.go:262] Caches are synced for service account
	I0308 03:43:04.771123       1 shared_informer.go:262] Caches are synced for namespace
	I0308 03:43:04.817025       1 shared_informer.go:262] Caches are synced for cronjob
	I0308 03:43:04.818158       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0308 03:43:04.872319       1 shared_informer.go:262] Caches are synced for resource quota
	I0308 03:43:04.885465       1 shared_informer.go:262] Caches are synced for resource quota
	I0308 03:43:05.291894       1 shared_informer.go:262] Caches are synced for garbage collector
	I0308 03:43:05.321652       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-plxb7"
	I0308 03:43:05.352717       1 shared_informer.go:262] Caches are synced for garbage collector
	I0308 03:43:05.352730       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0308 03:43:05.419772       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0308 03:43:05.673720       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jv959"
	I0308 03:43:05.675371       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-t45sm"
	
	
	==> kube-proxy [d4b07cda1052] <==
	I0308 03:43:05.829290       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0308 03:43:05.829418       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0308 03:43:05.829528       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0308 03:43:05.841918       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0308 03:43:05.841931       1 server_others.go:206] "Using iptables Proxier"
	I0308 03:43:05.842202       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0308 03:43:05.842503       1 server.go:661] "Version info" version="v1.24.1"
	I0308 03:43:05.842510       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:43:05.843074       1 config.go:317] "Starting service config controller"
	I0308 03:43:05.843232       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0308 03:43:05.843243       1 config.go:226] "Starting endpoint slice config controller"
	I0308 03:43:05.843245       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0308 03:43:05.844483       1 config.go:444] "Starting node config controller"
	I0308 03:43:05.844488       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0308 03:43:05.945243       1 shared_informer.go:262] Caches are synced for node config
	I0308 03:43:05.945268       1 shared_informer.go:262] Caches are synced for service config
	I0308 03:43:05.945284       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [56771d857c09] <==
	W0308 03:42:48.513016       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 03:42:48.513019       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 03:42:48.513062       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 03:42:48.513098       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 03:42:48.513131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:42:48.513148       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:42:48.513267       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:42:48.513303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 03:42:48.513413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:42:48.513436       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:42:48.513488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 03:42:48.513508       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 03:42:48.513586       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:42:48.513607       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:42:48.513770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:42:48.513796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:42:48.513814       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 03:42:48.513828       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 03:42:49.397996       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 03:42:49.398212       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 03:42:49.414549       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 03:42:49.414623       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 03:42:49.498800       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:42:49.498831       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0308 03:42:49.909670       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-03-08 03:37:46 UTC, ends at Fri 2024-03-08 03:47:08 UTC. --
	Mar 08 03:42:53 running-upgrade-440000 kubelet[11787]: E0308 03:42:53.276628   11787 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-440000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-440000"
	Mar 08 03:42:53 running-upgrade-440000 kubelet[11787]: I0308 03:42:53.466748   11787 request.go:601] Waited for 1.112491414s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 08 03:42:53 running-upgrade-440000 kubelet[11787]: E0308 03:42:53.478272   11787 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-440000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-440000"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: I0308 03:43:04.673010   11787 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: I0308 03:43:04.716145   11787 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: I0308 03:43:04.716473   11787 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: I0308 03:43:04.816243   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcq7\" (UniqueName: \"kubernetes.io/projected/09411336-965c-4fb1-aa9d-faee3b1b6285-kube-api-access-nmcq7\") pod \"storage-provisioner\" (UID: \"09411336-965c-4fb1-aa9d-faee3b1b6285\") " pod="kube-system/storage-provisioner"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: I0308 03:43:04.816273   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/09411336-965c-4fb1-aa9d-faee3b1b6285-tmp\") pod \"storage-provisioner\" (UID: \"09411336-965c-4fb1-aa9d-faee3b1b6285\") " pod="kube-system/storage-provisioner"
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: E0308 03:43:04.925444   11787 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: E0308 03:43:04.925474   11787 projected.go:192] Error preparing data for projected volume kube-api-access-nmcq7 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 08 03:43:04 running-upgrade-440000 kubelet[11787]: E0308 03:43:04.925532   11787 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/09411336-965c-4fb1-aa9d-faee3b1b6285-kube-api-access-nmcq7 podName:09411336-965c-4fb1-aa9d-faee3b1b6285 nodeName:}" failed. No retries permitted until 2024-03-08 03:43:05.425513623 +0000 UTC m=+14.202272502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nmcq7" (UniqueName: "kubernetes.io/projected/09411336-965c-4fb1-aa9d-faee3b1b6285-kube-api-access-nmcq7") pod "storage-provisioner" (UID: "09411336-965c-4fb1-aa9d-faee3b1b6285") : configmap "kube-root-ca.crt" not found
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.322601   11787 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.521269   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba427e03-eb00-4ee6-98c7-b4fd271e572c-xtables-lock\") pod \"kube-proxy-plxb7\" (UID: \"ba427e03-eb00-4ee6-98c7-b4fd271e572c\") " pod="kube-system/kube-proxy-plxb7"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.521631   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ch6z\" (UniqueName: \"kubernetes.io/projected/ba427e03-eb00-4ee6-98c7-b4fd271e572c-kube-api-access-8ch6z\") pod \"kube-proxy-plxb7\" (UID: \"ba427e03-eb00-4ee6-98c7-b4fd271e572c\") " pod="kube-system/kube-proxy-plxb7"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.521673   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba427e03-eb00-4ee6-98c7-b4fd271e572c-kube-proxy\") pod \"kube-proxy-plxb7\" (UID: \"ba427e03-eb00-4ee6-98c7-b4fd271e572c\") " pod="kube-system/kube-proxy-plxb7"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.521700   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba427e03-eb00-4ee6-98c7-b4fd271e572c-lib-modules\") pod \"kube-proxy-plxb7\" (UID: \"ba427e03-eb00-4ee6-98c7-b4fd271e572c\") " pod="kube-system/kube-proxy-plxb7"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.675559   11787 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.685298   11787 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.823918   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23aa6924-7ffc-4d0a-b490-dc93c30600bc-config-volume\") pod \"coredns-6d4b75cb6d-jv959\" (UID: \"23aa6924-7ffc-4d0a-b490-dc93c30600bc\") " pod="kube-system/coredns-6d4b75cb6d-jv959"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.823948   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4dd6a46-af99-4c5e-ad82-06375c9f84f6-config-volume\") pod \"coredns-6d4b75cb6d-t45sm\" (UID: \"d4dd6a46-af99-4c5e-ad82-06375c9f84f6\") " pod="kube-system/coredns-6d4b75cb6d-t45sm"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.823961   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgc66\" (UniqueName: \"kubernetes.io/projected/23aa6924-7ffc-4d0a-b490-dc93c30600bc-kube-api-access-rgc66\") pod \"coredns-6d4b75cb6d-jv959\" (UID: \"23aa6924-7ffc-4d0a-b490-dc93c30600bc\") " pod="kube-system/coredns-6d4b75cb6d-jv959"
	Mar 08 03:43:05 running-upgrade-440000 kubelet[11787]: I0308 03:43:05.823973   11787 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6d7\" (UniqueName: \"kubernetes.io/projected/d4dd6a46-af99-4c5e-ad82-06375c9f84f6-kube-api-access-4r6d7\") pod \"coredns-6d4b75cb6d-t45sm\" (UID: \"d4dd6a46-af99-4c5e-ad82-06375c9f84f6\") " pod="kube-system/coredns-6d4b75cb6d-t45sm"
	Mar 08 03:43:06 running-upgrade-440000 kubelet[11787]: I0308 03:43:06.450591   11787 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e849f2cccd31ee87460fd17fb87cf69106a8372d757dfc3cd43f8647cb12f9c4"
	Mar 08 03:46:53 running-upgrade-440000 kubelet[11787]: I0308 03:46:53.825954   11787 scope.go:110] "RemoveContainer" containerID="3dec00dfb0fe08fb884234488cfea81f98a7c7c0040eea0b90f2d78cef418b2a"
	Mar 08 03:46:53 running-upgrade-440000 kubelet[11787]: I0308 03:46:53.889815   11787 scope.go:110] "RemoveContainer" containerID="c36072e0dfe4814d1241c61842da7c53a996782d03079f3f37c478bf9334f666"
	
	
	==> storage-provisioner [7458e2507075] <==
	I0308 03:43:05.841174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 03:43:05.849673       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 03:43:05.849695       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 03:43:05.852991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 03:43:05.853067       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-440000_4f708e1b-67b1-4338-912c-82de59731012!
	I0308 03:43:05.853300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a16708e2-d3a4-4ae1-9b6f-96ec6cf36e8c", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-440000_4f708e1b-67b1-4338-912c-82de59731012 became leader
	I0308 03:43:05.953193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-440000_4f708e1b-67b1-4338-912c-82de59731012!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-440000 -n running-upgrade-440000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-440000 -n running-upgrade-440000: exit status 2 (15.637670291s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-440000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-440000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-440000
--- FAIL: TestRunningBinaryUpgrade (634.65s)
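[Editor's note] The TestRunningBinaryUpgrade capture above shows the upgraded v1.24.1 cluster actually booting (apiserver, controller-manager, kube-proxy, scheduler, CoreDNS and the storage provisioner all came up); the test then fails because the follow-up status probe reports the apiserver as "Stopped". TestKubernetesUpgrade below fails in a different mode: the VM never starts because socket_vmnet_client cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'). That error is an ordinary unix-socket dial failure, reproducible outside minikube. The sketch below is illustrative only, not part of the test suite; the socket path is the SocketVMnetPath value from the cluster configs logged below.

	// probe_socket_vmnet.go: attempt the same unix-socket connection that
	// socket_vmnet_client makes before handing a file descriptor to
	// qemu-system-aarch64. "connection refused" here reproduces the
	// GUEST_PROVISION failures captured below.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logged config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1) // matches the "Connection refused: exit status 1" seen below
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}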

TestKubernetesUpgrade (19.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.91894075s)

-- stdout --
	* [kubernetes-upgrade-149000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:39:49.962811    4662 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:39:49.962933    4662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:39:49.962936    4662 out.go:304] Setting ErrFile to fd 2...
	I0307 19:39:49.962939    4662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:39:49.963073    4662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:39:49.964123    4662 out.go:298] Setting JSON to false
	I0307 19:39:49.980387    4662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4161,"bootTime":1709865028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:39:49.980449    4662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:39:49.985327    4662 out.go:177] * [kubernetes-upgrade-149000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:39:49.993223    4662 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:39:49.993290    4662 notify.go:220] Checking for updates...
	I0307 19:39:50.000133    4662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:39:50.003179    4662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:39:50.004529    4662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:39:50.007173    4662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:39:50.010191    4662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:39:50.013516    4662 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:39:50.013583    4662 config.go:182] Loaded profile config "running-upgrade-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:39:50.013631    4662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:39:50.018145    4662 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:39:50.025173    4662 start.go:297] selected driver: qemu2
	I0307 19:39:50.025180    4662 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:39:50.025185    4662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:39:50.027401    4662 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:39:50.030170    4662 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:39:50.033242    4662 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 19:39:50.033279    4662 cni.go:84] Creating CNI manager for ""
	I0307 19:39:50.033287    4662 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 19:39:50.033312    4662 start.go:340] cluster config:
	{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:39:50.037741    4662 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:39:50.045194    4662 out.go:177] * Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	I0307 19:39:50.049199    4662 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 19:39:50.049213    4662 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 19:39:50.049222    4662 cache.go:56] Caching tarball of preloaded images
	I0307 19:39:50.049275    4662 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:39:50.049280    4662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 19:39:50.049338    4662 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kubernetes-upgrade-149000/config.json ...
	I0307 19:39:50.049348    4662 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kubernetes-upgrade-149000/config.json: {Name:mk96491d34fd48bbc48f42fdd4bc269e9b42b362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:39:50.049539    4662 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:39:50.049571    4662 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0307 19:39:50.049582    4662 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:39:50.049613    4662 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:39:50.057170    4662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:39:50.073509    4662 start.go:159] libmachine.API.Create for "kubernetes-upgrade-149000" (driver="qemu2")
	I0307 19:39:50.073535    4662 client.go:168] LocalClient.Create starting
	I0307 19:39:50.073590    4662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:39:50.073617    4662 main.go:141] libmachine: Decoding PEM data...
	I0307 19:39:50.073626    4662 main.go:141] libmachine: Parsing certificate...
	I0307 19:39:50.073675    4662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:39:50.073696    4662 main.go:141] libmachine: Decoding PEM data...
	I0307 19:39:50.073703    4662 main.go:141] libmachine: Parsing certificate...
	I0307 19:39:50.074054    4662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:39:50.213716    4662 main.go:141] libmachine: Creating SSH key...
	I0307 19:39:50.419562    4662 main.go:141] libmachine: Creating Disk image...
	I0307 19:39:50.419570    4662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:39:50.419784    4662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:50.432244    4662 main.go:141] libmachine: STDOUT: 
	I0307 19:39:50.432264    4662 main.go:141] libmachine: STDERR: 
	I0307 19:39:50.432322    4662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2 +20000M
	I0307 19:39:50.443670    4662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:39:50.443686    4662 main.go:141] libmachine: STDERR: 
	I0307 19:39:50.443699    4662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:50.443704    4662 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:39:50.443746    4662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:53:37:b8:2e:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:50.445541    4662 main.go:141] libmachine: STDOUT: 
	I0307 19:39:50.445557    4662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:39:50.445590    4662 client.go:171] duration metric: took 372.05325ms to LocalClient.Create
	I0307 19:39:52.447683    4662 start.go:128] duration metric: took 2.398149667s to createHost
	I0307 19:39:52.447772    4662 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 2.39829125s
	W0307 19:39:52.447810    4662 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:39:52.458169    4662 out.go:177] * Deleting "kubernetes-upgrade-149000" in qemu2 ...
	W0307 19:39:52.478697    4662 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:39:52.478715    4662 start.go:728] Will try again in 5 seconds ...
	I0307 19:39:57.480683    4662 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:39:57.481082    4662 start.go:364] duration metric: took 319.458µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0307 19:39:57.481203    4662 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:39:57.481467    4662 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:39:57.490963    4662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:39:57.532868    4662 start.go:159] libmachine.API.Create for "kubernetes-upgrade-149000" (driver="qemu2")
	I0307 19:39:57.532911    4662 client.go:168] LocalClient.Create starting
	I0307 19:39:57.533040    4662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:39:57.533103    4662 main.go:141] libmachine: Decoding PEM data...
	I0307 19:39:57.533122    4662 main.go:141] libmachine: Parsing certificate...
	I0307 19:39:57.533185    4662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:39:57.533221    4662 main.go:141] libmachine: Decoding PEM data...
	I0307 19:39:57.533233    4662 main.go:141] libmachine: Parsing certificate...
	I0307 19:39:57.533745    4662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:39:57.683096    4662 main.go:141] libmachine: Creating SSH key...
	I0307 19:39:57.782945    4662 main.go:141] libmachine: Creating Disk image...
	I0307 19:39:57.782952    4662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:39:57.783137    4662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:57.795634    4662 main.go:141] libmachine: STDOUT: 
	I0307 19:39:57.795653    4662 main.go:141] libmachine: STDERR: 
	I0307 19:39:57.795719    4662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2 +20000M
	I0307 19:39:57.806810    4662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:39:57.806824    4662 main.go:141] libmachine: STDERR: 
	I0307 19:39:57.806840    4662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:57.806853    4662 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:39:57.806892    4662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:df:81:44:d9:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:39:57.808591    4662 main.go:141] libmachine: STDOUT: 
	I0307 19:39:57.808605    4662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:39:57.808620    4662 client.go:171] duration metric: took 275.715292ms to LocalClient.Create
	I0307 19:39:59.810735    4662 start.go:128] duration metric: took 2.329326958s to createHost
	I0307 19:39:59.810842    4662 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 2.32983425s
	W0307 19:39:59.811234    4662 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:39:59.821893    4662 out.go:177] 
	W0307 19:39:59.827029    4662 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:39:59.827132    4662 out.go:239] * 
	* 
	W0307 19:39:59.829562    4662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:39:59.840357    4662 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-149000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-149000: (3.672744125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-149000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-149000 status --format={{.Host}}: exit status 7 (63.396042ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
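[Editor's note] The --format flags exercised here ({{.Host}}, and {{.APIServer}} in TestRunningBinaryUpgrade above) are Go text/template expressions evaluated against minikube's status struct, which is why each command prints a single bare word such as "Stopped". A self-contained illustration of that mechanism follows; the two-field Status struct is an assumption made for this sketch (minikube's real status struct carries more fields).

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors only the fields referenced by the test's --format
	// templates; minikube's real status struct is larger.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Equivalent of: minikube status --format={{.Host}}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// A machine that was never started reports "Stopped", matching the
		// -- stdout -- capture above.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped", APIServer: "Stopped"}); err != nil {
			panic(err)
		}
	}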
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.187439916s)

-- stdout --
	* [kubernetes-upgrade-149000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:40:03.621852    4704 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:40:03.622003    4704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:40:03.622006    4704 out.go:304] Setting ErrFile to fd 2...
	I0307 19:40:03.622009    4704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:40:03.622145    4704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:40:03.623275    4704 out.go:298] Setting JSON to false
	I0307 19:40:03.639581    4704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4175,"bootTime":1709865028,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:40:03.639647    4704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:40:03.643766    4704 out.go:177] * [kubernetes-upgrade-149000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:40:03.651735    4704 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:40:03.651775    4704 notify.go:220] Checking for updates...
	I0307 19:40:03.655664    4704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:40:03.658759    4704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:40:03.660212    4704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:40:03.663694    4704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:40:03.666720    4704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:40:03.669956    4704 config.go:182] Loaded profile config "kubernetes-upgrade-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 19:40:03.670219    4704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:40:03.674706    4704 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:40:03.681686    4704 start.go:297] selected driver: qemu2
	I0307 19:40:03.681691    4704 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:40:03.681745    4704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:40:03.683954    4704 cni.go:84] Creating CNI manager for ""
	I0307 19:40:03.683973    4704 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:40:03.683995    4704 start.go:340] cluster config:
	{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:40:03.688305    4704 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:40:03.695720    4704 out.go:177] * Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	I0307 19:40:03.699661    4704 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 19:40:03.699680    4704 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 19:40:03.699691    4704 cache.go:56] Caching tarball of preloaded images
	I0307 19:40:03.699737    4704 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:40:03.699743    4704 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 19:40:03.699793    4704 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kubernetes-upgrade-149000/config.json ...
	I0307 19:40:03.700261    4704 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:40:03.700285    4704 start.go:364] duration metric: took 18.125µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0307 19:40:03.700293    4704 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:40:03.700299    4704 fix.go:54] fixHost starting: 
	I0307 19:40:03.700409    4704 fix.go:112] recreateIfNeeded on kubernetes-upgrade-149000: state=Stopped err=<nil>
	W0307 19:40:03.700419    4704 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:40:03.704680    4704 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	I0307 19:40:03.712715    4704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:df:81:44:d9:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:40:03.714648    4704 main.go:141] libmachine: STDOUT: 
	I0307 19:40:03.714667    4704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:40:03.714703    4704 fix.go:56] duration metric: took 14.402666ms for fixHost
	I0307 19:40:03.714708    4704 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 14.419417ms
	W0307 19:40:03.714714    4704 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:40:03.714754    4704 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:40:03.714759    4704 start.go:728] Will try again in 5 seconds ...
	I0307 19:40:08.716725    4704 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:40:08.717098    4704 start.go:364] duration metric: took 287.5µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0307 19:40:08.717210    4704 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:40:08.717231    4704 fix.go:54] fixHost starting: 
	I0307 19:40:08.717961    4704 fix.go:112] recreateIfNeeded on kubernetes-upgrade-149000: state=Stopped err=<nil>
	W0307 19:40:08.718006    4704 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:40:08.727492    4704 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	I0307 19:40:08.731670    4704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:df:81:44:d9:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0307 19:40:08.741324    4704 main.go:141] libmachine: STDOUT: 
	I0307 19:40:08.741423    4704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:40:08.741504    4704 fix.go:56] duration metric: took 24.273542ms for fixHost
	I0307 19:40:08.741521    4704 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 24.373375ms
	W0307 19:40:08.741687    4704 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:40:08.749483    4704 out.go:177] 
	W0307 19:40:08.752671    4704 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:40:08.752696    4704 out.go:239] * 
	* 
	W0307 19:40:08.754570    4704 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:40:08.763538    4704 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-149000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-149000 version --output=json: exit status 1 (58.724792ms)

** stderr ** 
	error: context "kubernetes-upgrade-149000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-07 19:40:08.837243 -0800 PST m=+2670.171037542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-149000 -n kubernetes-upgrade-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-149000 -n kubernetes-upgrade-149000: exit status 7 (34.76525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-149000
--- FAIL: TestKubernetesUpgrade (19.04s)
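The root cause here is environmental, and it recurs through this report: the qemu2 driver needs the socket_vmnet helper for networking, and every attempt to reach its control socket is refused. A minimal Go probe that reproduces the failing check (a sketch for illustration only; the socket path comes from the log above, and the probe is not part of minikube's code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Path taken from the driver log; "connection refused" here means the
    	// socket_vmnet daemon is not running (or not listening) on the host.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is up at", sock)
    }

If the probe fails the same way the driver did, restarting the socket_vmnet daemon on the build host is the obvious first step before rerunning the test.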

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.99s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18333
- KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current137252341/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.99s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18333
- KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2675904802/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)
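Both hyperkit subtests fail for the same environmental reason rather than a code regression: hyperkit is an Intel-only macOS hypervisor, so on this darwin/arm64 agent minikube rejects the driver up front with DRV_UNSUPPORTED_OS (exit status 56). A simplified stand-in for that gate (a sketch, not minikube's actual driver-registry code):

    package main

    import (
    	"fmt"
    	"runtime"
    )

    // hyperkitSupported mirrors the platform gate implied by the log:
    // the driver only exists for Intel macOS.
    func hyperkitSupported() bool {
    	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
    }

    func main() {
    	if !hyperkitSupported() {
    		fmt.Printf("driver 'hyperkit' is not supported on %s/%s\n",
    			runtime.GOOS, runtime.GOARCH)
    	}
    }

On arm64 agents these subtests will always fail this way; skipping them on this architecture would be more accurate than reporting them as failures.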

TestStoppedBinaryUpgrade/Upgrade (580.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3363077665 start -p stopped-upgrade-126000 --memory=2200 --vm-driver=qemu2 
E0307 19:40:44.658627    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3363077665 start -p stopped-upgrade-126000 --memory=2200 --vm-driver=qemu2 : (44.833073084s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3363077665 -p stopped-upgrade-126000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3363077665 -p stopped-upgrade-126000 stop: (12.120787417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0307 19:42:37.691866    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:45:40.752241    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:45:44.645002    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.277803333s)

-- stdout --
	* [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0307 19:41:11.077647    4765 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:41:11.077797    4765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:41:11.077801    4765 out.go:304] Setting ErrFile to fd 2...
	I0307 19:41:11.077804    4765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:41:11.077974    4765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:41:11.079172    4765 out.go:298] Setting JSON to false
	I0307 19:41:11.098580    4765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4243,"bootTime":1709865028,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:41:11.098648    4765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:41:11.102250    4765 out.go:177] * [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:41:11.110269    4765 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:41:11.115099    4765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:41:11.110282    4765 notify.go:220] Checking for updates...
	I0307 19:41:11.121167    4765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:41:11.124080    4765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:41:11.127120    4765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:41:11.130171    4765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:41:11.133357    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:41:11.137086    4765 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 19:41:11.140130    4765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:41:11.143065    4765 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:41:11.150133    4765 start.go:297] selected driver: qemu2
	I0307 19:41:11.150138    4765 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:41:11.150186    4765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:41:11.152644    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:41:11.152662    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:41:11.152692    4765 start.go:340] cluster config:
	{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:41:11.152741    4765 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:41:11.160096    4765 out.go:177] * Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	I0307 19:41:11.164136    4765 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:41:11.164152    4765 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 19:41:11.164162    4765 cache.go:56] Caching tarball of preloaded images
	I0307 19:41:11.164226    4765 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:41:11.164233    4765 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 19:41:11.164289    4765 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0307 19:41:11.164764    4765 start.go:360] acquireMachinesLock for stopped-upgrade-126000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:41:11.164798    4765 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "stopped-upgrade-126000"
	I0307 19:41:11.164806    4765 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:41:11.164811    4765 fix.go:54] fixHost starting: 
	I0307 19:41:11.164920    4765 fix.go:112] recreateIfNeeded on stopped-upgrade-126000: state=Stopped err=<nil>
	W0307 19:41:11.164929    4765 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:41:11.173130    4765 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	I0307 19:41:11.177147    4765 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50475-:22,hostfwd=tcp::50476-:2376,hostname=stopped-upgrade-126000 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/disk.qcow2
	I0307 19:41:11.223378    4765 main.go:141] libmachine: STDOUT: 
	I0307 19:41:11.223406    4765 main.go:141] libmachine: STDERR: 
	I0307 19:41:11.223414    4765 main.go:141] libmachine: Waiting for VM to start (ssh -p 50475 docker@127.0.0.1)...
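Unlike the socket_vmnet runs above, this restart succeeds: the v1.26.0 profile uses qemu's user-mode networking with hostfwd port forwards, so no helper daemon is needed, and libmachine then blocks until the forwarded SSH port answers. A reduced sketch of that wait loop (hypothetical, with the port number copied from the qemu command above; not libmachine's actual code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Port 50475 is the hostfwd SSH port from the qemu invocation above.
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", "127.0.0.1:50475", 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("VM is accepting connections on the SSH port")
    			return
    		}
    		time.Sleep(2 * time.Second) // VM still booting; try again
    	}
    	fmt.Println("timed out waiting for the VM's SSH port")
    }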
	I0307 19:41:31.348388    4765 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0307 19:41:31.348995    4765 machine.go:94] provisionDockerMachine start ...
	I0307 19:41:31.349086    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.349405    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.349417    4765 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 19:41:31.420617    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 19:41:31.420640    4765 buildroot.go:166] provisioning hostname "stopped-upgrade-126000"
	I0307 19:41:31.420724    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.420898    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.420907    4765 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-126000 && echo "stopped-upgrade-126000" | sudo tee /etc/hostname
	I0307 19:41:31.488053    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-126000
	
	I0307 19:41:31.488108    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.488217    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.488227    4765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-126000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-126000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-126000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 19:41:31.550310    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 19:41:31.550327    4765 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18333-1199/.minikube CaCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18333-1199/.minikube}
	I0307 19:41:31.550342    4765 buildroot.go:174] setting up certificates
	I0307 19:41:31.550346    4765 provision.go:84] configureAuth start
	I0307 19:41:31.550350    4765 provision.go:143] copyHostCerts
	I0307 19:41:31.550420    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem, removing ...
	I0307 19:41:31.550428    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem
	I0307 19:41:31.550542    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/cert.pem (1123 bytes)
	I0307 19:41:31.550705    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem, removing ...
	I0307 19:41:31.550710    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem
	I0307 19:41:31.550792    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/key.pem (1675 bytes)
	I0307 19:41:31.550927    4765 exec_runner.go:144] found /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem, removing ...
	I0307 19:41:31.550932    4765 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem
	I0307 19:41:31.550989    4765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.pem (1082 bytes)
	I0307 19:41:31.551076    4765 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-126000 san=[127.0.0.1 localhost minikube stopped-upgrade-126000]
	I0307 19:41:31.670070    4765 provision.go:177] copyRemoteCerts
	I0307 19:41:31.670099    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 19:41:31.670107    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:31.699644    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 19:41:31.706513    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 19:41:31.713658    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 19:41:31.720262    4765 provision.go:87] duration metric: took 169.915375ms to configureAuth
	I0307 19:41:31.720270    4765 buildroot.go:189] setting minikube options for container-runtime
	I0307 19:41:31.720362    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:41:31.720396    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.720476    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.720480    4765 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 19:41:31.778463    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 19:41:31.778470    4765 buildroot.go:70] root file system type: tmpfs
	I0307 19:41:31.778519    4765 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 19:41:31.778570    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.778670    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.778706    4765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 19:41:31.843540    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 19:41:31.843597    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:31.843710    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:31.843718    4765 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 19:41:32.216660    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 19:41:32.216676    4765 machine.go:97] duration metric: took 867.707166ms to provisionDockerMachine
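The SSH command above is a small idempotency idiom: the freshly rendered unit is written to docker.service.new, and docker is only swapped in and restarted when `diff` reports a change (or, as here, when no current unit exists yet). A local Go sketch of the same sequence (a hypothetical helper; the real run executes these steps over SSH as root):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const cur = "/lib/systemd/system/docker.service"
    	const next = "/lib/systemd/system/docker.service.new"
    	// diff exits 0 only when the files are identical; any difference (or a
    	// missing current unit) falls through to the swap-and-restart branch.
    	if err := exec.Command("diff", "-u", cur, next).Run(); err == nil {
    		fmt.Println("unit unchanged; skipping docker restart")
    		return
    	}
    	for _, args := range [][]string{
    		{"mv", next, cur},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
    			fmt.Println("step failed:", args, err)
    			return
    		}
    	}
    	fmt.Println("docker unit installed and service restarted")
    }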
	I0307 19:41:32.216684    4765 start.go:293] postStartSetup for "stopped-upgrade-126000" (driver="qemu2")
	I0307 19:41:32.216691    4765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 19:41:32.216760    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 19:41:32.216770    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:32.249423    4765 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 19:41:32.250652    4765 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 19:41:32.250661    4765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/addons for local assets ...
	I0307 19:41:32.250939    4765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18333-1199/.minikube/files for local assets ...
	I0307 19:41:32.251054    4765 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem -> 16202.pem in /etc/ssl/certs
	I0307 19:41:32.251176    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 19:41:32.254050    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:41:32.260880    4765 start.go:296] duration metric: took 44.192375ms for postStartSetup
	I0307 19:41:32.260895    4765 fix.go:56] duration metric: took 21.096966667s for fixHost
	I0307 19:41:32.260931    4765 main.go:141] libmachine: Using SSH client type: native
	I0307 19:41:32.261072    4765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024b1a30] 0x1024b4290 <nil>  [] 0s} localhost 50475 <nil> <nil>}
	I0307 19:41:32.261076    4765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 19:41:32.319677    4765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709869292.044635212
	
	I0307 19:41:32.319684    4765 fix.go:216] guest clock: 1709869292.044635212
	I0307 19:41:32.319688    4765 fix.go:229] Guest: 2024-03-07 19:41:32.044635212 -0800 PST Remote: 2024-03-07 19:41:32.260897 -0800 PST m=+21.218306209 (delta=-216.261788ms)
	I0307 19:41:32.319702    4765 fix.go:200] guest clock delta is within tolerance: -216.261788ms
	I0307 19:41:32.319705    4765 start.go:83] releasing machines lock for "stopped-upgrade-126000", held for 21.155785833s
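The fix.go lines above show how the guest clock is validated: minikube runs `date +%s.%N` in the VM, subtracts the host's timestamp, and accepts the result if the delta is small. A worked sketch with the numbers from this log (the one-second tolerance is an assumption for illustration, not minikube's actual constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the log's fix.go lines.
    	guest := time.Unix(1709869292, 44635212)         // "date +%s.%N" inside the VM
    	remote := guest.Add(216261788 * time.Nanosecond) // host clock at the probe
    	delta := guest.Sub(remote)                       // -216.261788ms, matching the log
    	const tolerance = time.Second // assumed threshold, for illustration only
    	if delta < -tolerance || delta > tolerance {
    		fmt.Println("guest clock skewed, would trigger a resync:", delta)
    	} else {
    		fmt.Println("guest clock delta within tolerance:", delta)
    	}
    }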
	I0307 19:41:32.319767    4765 ssh_runner.go:195] Run: cat /version.json
	I0307 19:41:32.319769    4765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 19:41:32.319775    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:41:32.319785    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	W0307 19:41:32.320363    4765 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50475: connect: connection refused
	I0307 19:41:32.320387    4765 retry.go:31] will retry after 340.433681ms: dial tcp [::1]:50475: connect: connection refused
	W0307 19:41:32.350458    4765 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 19:41:32.350521    4765 ssh_runner.go:195] Run: systemctl --version
	I0307 19:41:32.352229    4765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 19:41:32.353701    4765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 19:41:32.353724    4765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 19:41:32.356404    4765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 19:41:32.360499    4765 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 19:41:32.360506    4765 start.go:494] detecting cgroup driver to use...
	I0307 19:41:32.360581    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:41:32.368307    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 19:41:32.371432    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 19:41:32.374744    4765 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 19:41:32.374768    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 19:41:32.378301    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:41:32.381824    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 19:41:32.385172    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 19:41:32.388312    4765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 19:41:32.391235    4765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 19:41:32.394626    4765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 19:41:32.397827    4765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 19:41:32.400806    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:32.485009    4765 ssh_runner.go:195] Run: sudo systemctl restart containerd
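This run of sed commands pins containerd to the cgroupfs driver before docker is configured the same way further down. A local Go equivalent of the SystemdCgroup edit (a sketch; the log performs it remotely via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	// Same rewrite as the sed in the log: force SystemdCgroup off so
    	// containerd uses the cgroupfs driver.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		fmt.Println("write:", err)
    		return
    	}
    	// The log follows up with: systemctl daemon-reload && systemctl restart containerd.
    }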
	I0307 19:41:32.492771    4765 start.go:494] detecting cgroup driver to use...
	I0307 19:41:32.492843    4765 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 19:41:32.504533    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:41:32.512059    4765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 19:41:32.517821    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 19:41:32.522535    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:41:32.527437    4765 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 19:41:32.566404    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 19:41:32.571691    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 19:41:32.576951    4765 ssh_runner.go:195] Run: which cri-dockerd
	I0307 19:41:32.578364    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 19:41:32.581400    4765 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 19:41:32.586259    4765 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 19:41:32.676114    4765 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 19:41:32.765927    4765 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 19:41:32.765990    4765 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 19:41:32.772147    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:32.853544    4765 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:41:34.007193    4765 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153679459s)
	I0307 19:41:34.007248    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 19:41:34.012175    4765 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 19:41:34.018113    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:41:34.023136    4765 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 19:41:34.106094    4765 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 19:41:34.180924    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:34.258148    4765 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 19:41:34.264529    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 19:41:34.268758    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:34.341199    4765 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 19:41:34.380692    4765 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 19:41:34.380774    4765 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 19:41:34.382704    4765 start.go:562] Will wait 60s for crictl version
	I0307 19:41:34.382744    4765 ssh_runner.go:195] Run: which crictl
	I0307 19:41:34.384811    4765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 19:41:34.399165    4765 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 19:41:34.399233    4765 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:41:34.415635    4765 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 19:41:34.437015    4765 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 19:41:34.437078    4765 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 19:41:34.438370    4765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:41:34.442305    4765 kubeadm.go:877] updating cluster {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 19:41:34.442354    4765 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 19:41:34.442397    4765 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:41:34.453578    4765 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:41:34.453587    4765 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 19:41:34.453630    4765 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:41:34.457060    4765 ssh_runner.go:195] Run: which lz4
	I0307 19:41:34.458355    4765 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 19:41:34.459608    4765 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 19:41:34.459618    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 19:41:35.223254    4765 docker.go:649] duration metric: took 764.965666ms to copy over tarball
	I0307 19:41:35.223307    4765 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 19:41:36.408602    4765 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185331708s)
	I0307 19:41:36.408615    4765 ssh_runner.go:146] rm: /preloaded.tar.lz4
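The preload path above is the fast path: a ~360 MB tarball of images is copied into the VM once, unpacked over /var with lz4, and deleted. A condensed local sketch of that existence-check-then-extract flow (illustrative only; the real calls go through ssh_runner over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4" // target path from the log
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("not present, would copy it over first:", err)
    		// scp step elided: the log copies ~360 MB from the local cache.
    		return
    	}
    	// Same extraction command as the log, minus the ssh transport.
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Println("extract failed:", err, string(out))
    		return
    	}
    	os.Remove(tarball) // the log removes the tarball after a clean extract
    	fmt.Println("preloaded images restored under /var")
    }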
	I0307 19:41:36.424179    4765 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 19:41:36.427107    4765 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 19:41:36.432308    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:36.512715    4765 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 19:41:38.025115    4765 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.512446s)
	I0307 19:41:38.025197    4765 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 19:41:38.038670    4765 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 19:41:38.038678    4765 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 19:41:38.038683    4765 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 19:41:38.045113    4765 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:38.045113    4765 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 19:41:38.045178    4765 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:38.045190    4765 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:38.045292    4765 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:38.045491    4765 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:38.045765    4765 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:38.045914    4765 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:38.054834    4765 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:38.054988    4765 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:38.055930    4765 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:38.056025    4765 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:38.055983    4765 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:38.056110    4765 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:38.056119    4765 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 19:41:38.056190    4765 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.020396    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 19:41:40.058577    4765 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 19:41:40.058626    4765 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 19:41:40.058722    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 19:41:40.079112    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 19:41:40.079264    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 19:41:40.082446    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 19:41:40.082464    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 19:41:40.090039    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.093108    4765 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 19:41:40.093120    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 19:41:40.102443    4765 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 19:41:40.102466    4765 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.102522    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 19:41:40.132333    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0307 19:41:40.132381    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 19:41:40.135629    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.145047    4765 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 19:41:40.145067    4765 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.145116    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 19:41:40.149102    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.160079    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0307 19:41:40.160409    4765 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 19:41:40.160530    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.167976    4765 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 19:41:40.167996    4765 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.168046    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 19:41:40.170167    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.173707    4765 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 19:41:40.173726    4765 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.173767    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 19:41:40.174111    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.184959    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 19:41:40.193408    4765 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 19:41:40.193431    4765 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.193490    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 19:41:40.199070    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 19:41:40.199090    4765 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 19:41:40.199108    4765 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.199155    4765 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 19:41:40.199175    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:41:40.205413    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 19:41:40.205434    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 19:41:40.205448    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 19:41:40.217088    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 19:41:40.217201    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:41:40.218820    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0307 19:41:40.218836    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0307 19:41:40.293988    4765 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 19:41:40.294003    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 19:41:40.436201    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0307 19:41:40.438801    4765 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0307 19:41:40.438810    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0307 19:41:40.582198    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0307 19:41:40.742276    4765 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 19:41:40.742487    4765 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.765172    4765 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 19:41:40.765202    4765 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.765278    4765 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:41:40.785582    4765 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 19:41:40.785714    4765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:41:40.787420    4765 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 19:41:40.787435    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 19:41:40.813789    4765 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 19:41:40.813803    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 19:41:41.065161    4765 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 19:41:41.065194    4765 cache_images.go:92] duration metric: took 3.026602709s to LoadCachedImages
	W0307 19:41:41.065236    4765 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
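	The block above is minikube's cache-image path: inspect the container runtime for each required image, remove stale or wrong-arch copies, copy the tarball from the host cache into the guest, and pipe it through docker load. Reduced to its shell skeleton (a sketch using paths from the log; the scp line stands in for minikube's internal SSH transfer):

	    IMG=registry.k8s.io/pause:3.7
	    TAR=/var/lib/minikube/images/pause_3.7
	    CACHE=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	    # Transfer and load only when the runtime does not already hold the image.
	    if ! docker image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	      stat -c "%s %y" "$TAR" >/dev/null 2>&1 || scp "$CACHE" "guest:$TAR"
	      sudo cat "$TAR" | docker load
	    fi

	The failure recorded in the X line above is upstream of this loop: the kube-controller-manager tarball is missing from the host cache, and the log confirms transfers only for pause, coredns, etcd, and storage-provisioner.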
	I0307 19:41:41.065276    4765 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 19:41:41.065334    4765 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-126000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 19:41:41.065395    4765 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 19:41:41.080217    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:41:41.080495    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:41:41.080507    4765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 19:41:41.080516    4765 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126000 NodeName:stopped-upgrade-126000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 19:41:41.080581    4765 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-126000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
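	The rendered config stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" markers; kubeadm reads them all from the single file written to /var/tmp/minikube/kubeadm.yaml.new below. A quick structural sanity check (illustrative, not part of the test run):

	    # Three '---' separators and four 'kind:' lines are expected for this config.
	    grep -c '^---$' /var/tmp/minikube/kubeadm.yaml.new
	    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new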
	I0307 19:41:41.080957    4765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 19:41:41.083924    4765 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 19:41:41.083974    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 19:41:41.086744    4765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 19:41:41.091857    4765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 19:41:41.097042    4765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 19:41:41.103476    4765 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 19:41:41.104826    4765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 19:41:41.108649    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:41:41.192473    4765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:41:41.197796    4765 certs.go:68] Setting up /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000 for IP: 10.0.2.15
	I0307 19:41:41.197808    4765 certs.go:194] generating shared ca certs ...
	I0307 19:41:41.197816    4765 certs.go:226] acquiring lock for ca certs: {Name:mkeed6c4d5ba27d3ef2bc04c52c43819ca546cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.197965    4765 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key
	I0307 19:41:41.198014    4765 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key
	I0307 19:41:41.198019    4765 certs.go:256] generating profile certs ...
	I0307 19:41:41.198090    4765 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key
	I0307 19:41:41.198108    4765 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522
	I0307 19:41:41.198120    4765 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 19:41:41.392227    4765 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 ...
	I0307 19:41:41.392243    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522: {Name:mkb5d7319d65594aa8434f1dd9aee32ab3bfe11a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.393623    4765 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 ...
	I0307 19:41:41.393641    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522: {Name:mk6e53ce5f1bfbe4a87d76c16cf03e10911c4d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.393787    4765 certs.go:381] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt
	I0307 19:41:41.393919    4765 certs.go:385] copying /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key
	I0307 19:41:41.394171    4765 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.key
	I0307 19:41:41.394292    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem (1338 bytes)
	W0307 19:41:41.394325    4765 certs.go:480] ignoring /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620_empty.pem, impossibly tiny 0 bytes
	I0307 19:41:41.394331    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 19:41:41.394352    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem (1082 bytes)
	I0307 19:41:41.394374    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem (1123 bytes)
	I0307 19:41:41.394394    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/key.pem (1675 bytes)
	I0307 19:41:41.394429    4765 certs.go:484] found cert: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem (1708 bytes)
	I0307 19:41:41.394742    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 19:41:41.401871    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 19:41:41.409023    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 19:41:41.416483    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 19:41:41.423627    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 19:41:41.430343    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 19:41:41.438084    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 19:41:41.445024    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 19:41:41.451555    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/1620.pem --> /usr/share/ca-certificates/1620.pem (1338 bytes)
	I0307 19:41:41.458120    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/ssl/certs/16202.pem --> /usr/share/ca-certificates/16202.pem (1708 bytes)
	I0307 19:41:41.465916    4765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 19:41:41.474603    4765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 19:41:41.480875    4765 ssh_runner.go:195] Run: openssl version
	I0307 19:41:41.483123    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 19:41:41.486575    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.488317    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:57 /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.488356    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 19:41:41.490153    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 19:41:41.493488    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1620.pem && ln -fs /usr/share/ca-certificates/1620.pem /etc/ssl/certs/1620.pem"
	I0307 19:41:41.496679    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.497995    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:04 /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.498015    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1620.pem
	I0307 19:41:41.499778    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1620.pem /etc/ssl/certs/51391683.0"
	I0307 19:41:41.502681    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16202.pem && ln -fs /usr/share/ca-certificates/16202.pem /etc/ssl/certs/16202.pem"
	I0307 19:41:41.506068    4765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.507540    4765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:04 /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.507563    4765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16202.pem
	I0307 19:41:41.509311    4765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16202.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 19:41:41.512164    4765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 19:41:41.513621    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 19:41:41.516214    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 19:41:41.518051    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 19:41:41.519999    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 19:41:41.521697    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 19:41:41.523403    4765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
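	Each -checkend 86400 probe above asks whether the certificate remains valid for at least the next 86400 seconds (24 hours): exit status 0 means no imminent expiry, so the existing cert is reused; non-zero would trigger regeneration. The stand-alone form of the same check (sketch; cert path taken from the log):

	    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	    if openssl x509 -noout -in "$CRT" -checkend 86400; then
	      echo "valid for at least another 24h"
	    else
	      echo "expires within 24h, regenerate"
	    fi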
	I0307 19:41:41.525096    4765 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50510 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 19:41:41.525158    4765 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:41:41.535479    4765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 19:41:41.538534    4765 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 19:41:41.538540    4765 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 19:41:41.538543    4765 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 19:41:41.538565    4765 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 19:41:41.541892    4765 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 19:41:41.542190    4765 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126000" does not appear in /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:41:41.542280    4765 kubeconfig.go:62] /Users/jenkins/minikube-integration/18333-1199/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126000" cluster setting kubeconfig missing "stopped-upgrade-126000" context setting]
	I0307 19:41:41.542473    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:41:41.542943    4765 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1037a76a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:41:41.543254    4765 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 19:41:41.546053    4765 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-126000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
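	The reconfigure decision keys off the exit status of diff -u: any drift between the live kubeadm.yaml and the freshly rendered .new file (here the criSocket URL scheme and the kubelet cgroup driver) forces a restart from the new manifest, which is the cp at 19:41:41.598139 below. As a sketch:

	    # Non-zero diff exit status => config drift => reconfigure from the .new file.
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    fi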
	I0307 19:41:41.546058    4765 kubeadm.go:1153] stopping kube-system containers ...
	I0307 19:41:41.546096    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 19:41:41.556623    4765 docker.go:483] Stopping containers: [3b6448caa1dd 5edfe0ffe4cd 6572e576175a 31a1ca5c904b 095cdd1dff64 ab5e0688264a ad5a1b9317e8 3b2ae43e4bd5]
	I0307 19:41:41.556688    4765 ssh_runner.go:195] Run: docker stop 3b6448caa1dd 5edfe0ffe4cd 6572e576175a 31a1ca5c904b 095cdd1dff64 ab5e0688264a ad5a1b9317e8 3b2ae43e4bd5
	I0307 19:41:41.567749    4765 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 19:41:41.573257    4765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:41:41.575874    4765 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:41:41.575879    4765 kubeadm.go:156] found existing configuration files:
	
	I0307 19:41:41.575901    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf
	I0307 19:41:41.578470    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:41:41.578495    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:41:41.581528    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf
	I0307 19:41:41.584316    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:41:41.584344    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:41:41.587061    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf
	I0307 19:41:41.589932    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:41:41.589955    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:41:41.592994    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf
	I0307 19:41:41.595416    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:41:41.595434    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 19:41:41.598139    4765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:41:41.601177    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:41.621697    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.009916    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.133088    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.155455    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 19:41:42.173706    4765 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:41:42.173801    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:42.675985    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:43.175832    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:41:43.179782    4765 api_server.go:72] duration metric: took 1.006120917s to wait for apiserver process to appear ...
	I0307 19:41:43.179790    4765 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:41:43.179799    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:48.180154    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:48.180191    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:53.181520    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:53.181563    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:41:58.181729    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:41:58.181771    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:03.182010    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:03.182105    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:08.182765    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:08.182809    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:13.183430    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:13.183495    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:18.184277    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:18.184325    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:23.185481    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:23.185562    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:28.186452    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:28.186467    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:33.188167    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:33.188274    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:38.190701    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:38.190779    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:43.191371    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
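	From here the run settles into minikube's wait loop: probe https://10.0.2.15:8443/healthz with a roughly five-second client timeout, and after repeated failures gather container and journal logs before retrying. The probe can be reproduced by hand (sketch; -k skips verification because the cluster CA is not in the host trust store):

	    # A healthy apiserver answers "ok"; in this run every probe times out.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz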
	I0307 19:42:43.191761    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:43.224995    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:43.225130    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:43.244528    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:43.244616    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:43.266404    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:43.266481    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:43.278052    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:43.278116    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:43.288540    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:43.288614    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:43.299360    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:43.299431    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:43.309748    4765 logs.go:276] 0 containers: []
	W0307 19:42:43.309763    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:43.309816    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:43.320260    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:43.320284    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:43.320290    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:42:43.338546    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:43.338558    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:43.364876    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:43.364886    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:43.380416    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:43.380435    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:43.392015    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:43.392028    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:43.408263    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:43.408274    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:43.420328    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:43.420347    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:43.424778    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:43.424787    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:43.531312    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:43.531327    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:43.545251    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:43.545264    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:43.556944    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:43.556954    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:43.595166    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:43.595176    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:43.609289    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:43.609301    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:43.650935    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:43.650960    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:43.662386    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:43.662399    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:43.677630    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:43.677642    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
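	The "container status" gather in this block is runtime-agnostic: the inner `which crictl || echo crictl` always yields a non-empty command name, so when crictl is absent the first command fails cleanly and the outer || falls back to docker ps -a. Expanded for readability (same command as in the log):

	    # Prefer crictl if installed; otherwise the first command fails and docker takes over.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a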
	I0307 19:42:46.191385    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:51.193339    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:51.193442    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:51.205951    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:51.206026    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:51.217920    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:51.217996    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:51.232037    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:51.232120    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:51.242620    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:51.242681    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:51.257251    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:51.257311    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:51.267800    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:51.267878    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:51.279030    4765 logs.go:276] 0 containers: []
	W0307 19:42:51.279044    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:51.279109    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:51.294488    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:51.294507    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:51.294513    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:51.298621    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:51.298629    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:51.336607    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:51.336616    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:42:51.355518    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:51.355538    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:51.381296    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:51.381314    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:51.422368    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:51.422379    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:51.436671    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:51.436681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:51.448491    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:51.448504    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:51.466701    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:51.466715    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:51.481754    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:51.481764    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:51.493510    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:51.493524    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:51.507239    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:51.507250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:51.547033    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:51.547049    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:51.562443    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:51.562466    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:51.578399    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:51.578413    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:42:51.601439    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:51.601453    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:54.120113    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:42:59.122445    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:42:59.122814    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:42:59.154028    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:42:59.154165    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:42:59.174276    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:42:59.174374    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:42:59.188294    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:42:59.188364    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:42:59.200141    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:42:59.200215    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:42:59.215724    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:42:59.215801    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:42:59.226462    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:42:59.226535    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:42:59.236865    4765 logs.go:276] 0 containers: []
	W0307 19:42:59.236877    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:42:59.236938    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:42:59.247392    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:42:59.247410    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:42:59.247415    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:42:59.283781    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:42:59.283789    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:42:59.299197    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:42:59.299207    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:42:59.311619    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:42:59.311628    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:42:59.323545    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:42:59.323554    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:42:59.349794    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:42:59.349806    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:42:59.354019    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:42:59.354027    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:42:59.391006    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:42:59.391018    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:42:59.429374    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:42:59.429386    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:42:59.443558    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:42:59.443569    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:42:59.460124    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:42:59.460139    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:42:59.475055    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:42:59.475070    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:42:59.488673    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:42:59.488686    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:42:59.503830    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:42:59.503841    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:42:59.519801    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:42:59.519812    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:42:59.535205    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:42:59.535216    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:02.054684    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:07.056720    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:07.056893    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:07.071116    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:07.071199    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:07.082437    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:07.082495    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:07.092782    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:07.092845    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:07.102991    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:07.103066    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:07.113435    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:07.113494    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:07.124004    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:07.124078    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:07.134118    4765 logs.go:276] 0 containers: []
	W0307 19:43:07.134128    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:07.134186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:07.148550    4765 logs.go:276] 1 containers: [47847237a18e]
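When the probe fails, the tool falls back to enumerating control-plane containers one component at a time, filtering docker ps -a by the k8s_<component> name prefix and asking only for the ID; the "N containers: [...]" lines above are the result of exactly that. A minimal sketch of the enumeration, assuming a docker CLI on PATH (the filter and format flags and the component list are copied from the log; the helper name and error handling are mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs is a hypothetical helper mirroring the log's
    // `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One ID per output line; an empty slice means no container matched,
    	// which is what triggers the "No container was found" warning above.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Same shape as the log's logs.go:276 output, e.g. "2 containers: [...]".
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }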
	I0307 19:43:07.148567    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:07.148573    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:07.167046    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:07.167060    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:07.178813    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:07.178823    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:07.203610    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:07.203623    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:07.215670    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:07.215685    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:07.232474    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:07.232484    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:07.243997    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:07.244008    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:07.259602    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:07.259613    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:07.296975    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:07.296990    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:07.300803    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:07.300811    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:07.315139    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:07.315151    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:07.326131    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:07.326143    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:07.341246    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:07.341257    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:07.353383    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:07.353395    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:07.368718    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:07.368729    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:07.408497    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:07.408508    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
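Every container found this way then has its recent output pulled with docker logs --tail 400 <id>, while host-level sources (kubelet, Docker, dmesg) are read through journalctl or dmesg; in the log each command is wrapped in /bin/bash -c and executed over SSH by ssh_runner. A condensed local stand-in for that gathering step follows; only the commands and the 400-line budget are taken from the log, and running them locally instead of over SSH is a deliberate simplification.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command through bash -c, the same wrapping
    // the log shows for ssh_runner (executed locally here for illustration).
    func gather(name, cmd string) {
    	fmt.Println("Gathering logs for", name, "...")
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s failed: %v\n", name, err)
    	}
    	fmt.Printf("%s", out)
    }

    func main() {
    	// Commands quoted from the log; the container ID would come from the
    	// enumeration step sketched earlier.
    	gather("kube-apiserver [095cdd1dff64]", "docker logs --tail 400 095cdd1dff64")
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    }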
	I0307 19:43:09.949607    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:14.952089    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:14.952339    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:14.980096    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:14.980223    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:14.997146    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:14.997260    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:15.010589    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:15.010663    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:15.021703    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:15.021772    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:15.032145    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:15.032210    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:15.042986    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:15.043055    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:15.056464    4765 logs.go:276] 0 containers: []
	W0307 19:43:15.056477    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:15.056540    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:15.071150    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:15.071168    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:15.071174    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:15.107607    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:15.107619    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:15.122482    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:15.122493    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:15.147301    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:15.147310    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:15.158653    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:15.158662    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:15.196533    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:15.196544    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:15.200674    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:15.200681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:15.216465    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:15.216477    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:15.234654    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:15.234664    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:15.248816    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:15.248830    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:15.259988    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:15.259999    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:15.273953    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:15.273964    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:15.285591    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:15.285604    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:15.326929    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:15.326944    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:15.340824    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:15.340836    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:15.355687    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:15.355701    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:17.869240    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:22.871361    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:22.871668    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:22.904766    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:22.904894    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:22.926924    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:22.927021    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:22.940668    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:22.940748    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:22.952541    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:22.952610    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:22.963276    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:22.963347    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:22.974158    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:22.974225    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:22.984768    4765 logs.go:276] 0 containers: []
	W0307 19:43:22.984780    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:22.984835    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:22.995352    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:22.995367    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:22.995374    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:22.999968    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:22.999976    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:23.036638    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:23.036649    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:23.051216    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:23.051227    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:23.065804    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:23.065816    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:23.104525    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:23.104536    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:23.116665    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:23.116679    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:23.128033    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:23.128047    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:23.151609    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:23.151618    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:23.188459    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:23.188470    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:23.207569    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:23.207586    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:23.220194    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:23.220209    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:23.238655    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:23.238665    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:23.251275    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:23.251288    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:23.266220    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:23.266235    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:23.277943    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:23.277958    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:25.795004    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:30.797227    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:30.797413    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:30.813425    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:30.813511    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:30.827152    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:30.827228    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:30.838067    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:30.838142    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:30.848808    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:30.848870    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:30.858594    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:30.858660    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:30.869159    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:30.869223    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:30.879560    4765 logs.go:276] 0 containers: []
	W0307 19:43:30.879573    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:30.879630    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:30.890153    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:30.890171    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:30.890176    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:30.929837    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:30.929846    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:30.934250    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:30.934257    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:30.946026    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:30.946037    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:30.958107    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:30.958120    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:30.969775    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:30.969786    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:30.981279    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:30.981290    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:31.002738    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:31.002750    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:31.015050    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:31.015063    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:31.051558    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:31.051569    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:31.066053    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:31.066065    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:31.079895    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:31.079905    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:31.097217    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:31.097227    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:31.122171    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:31.122180    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:31.160419    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:31.160430    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:31.174510    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:31.174522    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
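The timestamps give the loop its cadence: a five-second healthz timeout followed by roughly three seconds of gathering, repeating about every eight seconds. A generic sketch of such a poll-and-collect loop is below; the per-attempt durations are inferred from the log's timestamps, and the overall deadline is a pure assumption for illustration, not a minikube constant.

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForHealthy is a generic sketch of the loop the timestamps imply:
    // probe, and on failure collect diagnostics, until an overall deadline.
    func waitForHealthy(probe func() error, collect func(), overall time.Duration) error {
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil
    		}
    		collect() // the ~3s "Gathering logs for ..." block between probes
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", overall)
    }

    func main() {
    	err := waitForHealthy(
    		func() error { return fmt.Errorf("context deadline exceeded") }, // stand-in probe
    		func() { time.Sleep(3 * time.Second) },                          // stand-in collector
    		30*time.Second,
    	)
    	fmt.Println(err)
    }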
	I0307 19:43:33.693448    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:38.695555    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:38.695721    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:38.710076    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:38.710157    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:38.720834    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:38.720903    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:38.731380    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:38.731450    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:38.749357    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:38.749426    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:38.771917    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:38.771983    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:38.783379    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:38.783449    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:38.797742    4765 logs.go:276] 0 containers: []
	W0307 19:43:38.797754    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:38.797814    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:38.808440    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:38.808455    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:38.808463    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:38.813103    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:38.813112    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:38.826441    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:38.826456    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:38.843633    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:38.843646    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:38.856035    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:38.856047    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:38.891587    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:38.891597    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:38.903479    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:38.903491    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:38.920625    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:38.920636    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:38.935424    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:38.935434    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:38.959498    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:38.959509    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:38.970564    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:38.970576    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:38.985170    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:38.985185    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:39.000804    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:39.000817    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:39.012491    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:39.012504    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:39.051405    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:39.051412    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:39.090377    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:39.090387    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
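One detail worth noting in the recurring "container status" step is the shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl first and falls back to plain docker ps -a when crictl is absent or fails. The same preference can be expressed directly, as in this sketch; the helper name is hypothetical and only the two commands themselves are quoted from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatusCmd mirrors the fallback idiom in the log's "container
    // status" step: prefer crictl when it is on PATH, otherwise use docker.
    func containerStatusCmd() string {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		return "sudo crictl ps -a"
    	}
    	return "sudo docker ps -a"
    }

    func main() {
    	fmt.Println("would run:", containerStatusCmd())
    }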
	I0307 19:43:41.608333    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:46.610387    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:46.610541    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:46.627309    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:46.627384    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:46.637789    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:46.637865    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:46.648289    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:46.648356    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:46.658638    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:46.658708    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:46.672692    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:46.672759    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:46.683175    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:46.683247    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:46.693564    4765 logs.go:276] 0 containers: []
	W0307 19:43:46.693576    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:46.693632    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:46.704333    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:46.704349    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:46.704355    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:46.739302    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:46.739315    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:46.779095    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:46.779110    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:46.794052    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:46.794064    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:46.806736    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:46.806746    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:46.821597    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:46.821608    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:46.837991    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:46.838004    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:46.852210    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:46.852221    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:46.862987    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:46.862998    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:46.883291    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:46.883304    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:46.898015    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:46.898026    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:46.937361    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:46.937370    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:46.941990    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:46.941996    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:46.959253    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:46.959265    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:46.983713    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:46.983724    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:46.998523    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:46.998533    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:49.512517    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:43:54.514657    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:43:54.514835    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:43:54.539892    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:43:54.540040    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:43:54.555934    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:43:54.556018    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:43:54.571193    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:43:54.571267    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:43:54.589229    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:43:54.589300    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:43:54.599565    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:43:54.599632    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:43:54.611188    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:43:54.611252    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:43:54.622483    4765 logs.go:276] 0 containers: []
	W0307 19:43:54.622495    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:43:54.622550    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:43:54.633577    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:43:54.633592    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:43:54.633598    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:43:54.656902    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:43:54.656913    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:43:54.694936    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:43:54.694944    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:43:54.744115    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:43:54.744129    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:43:54.759062    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:43:54.759073    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:43:54.771154    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:43:54.771170    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:43:54.791368    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:43:54.791379    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:43:54.805849    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:43:54.805860    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:43:54.826243    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:43:54.826253    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:43:54.850920    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:43:54.850927    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:43:54.862601    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:43:54.862610    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:43:54.874608    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:43:54.874619    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:43:54.878991    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:43:54.878999    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:43:54.913412    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:43:54.913423    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:43:54.928034    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:43:54.928045    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:43:54.940008    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:43:54.940017    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:43:57.457347    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:02.459446    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:02.459624    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:02.485985    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:02.486090    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:02.502789    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:02.502873    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:02.516445    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:02.516522    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:02.532890    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:02.532966    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:02.542889    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:02.542954    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:02.553765    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:02.553832    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:02.565782    4765 logs.go:276] 0 containers: []
	W0307 19:44:02.565793    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:02.565852    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:02.576299    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:02.576317    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:02.576321    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:02.587932    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:02.587943    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:02.592103    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:02.592109    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:02.628571    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:02.628581    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:02.642493    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:02.642505    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:02.654267    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:02.654277    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:02.696073    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:02.696083    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:02.713079    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:02.713091    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:02.727883    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:02.727893    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:02.752305    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:02.752317    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:02.791027    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:02.791035    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:02.805599    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:02.805611    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:02.820404    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:02.820416    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:02.831890    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:02.831900    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:02.852471    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:02.852483    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:02.874348    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:02.874365    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:05.397766    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:10.400249    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:10.400569    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:10.430288    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:10.430412    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:10.448830    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:10.448924    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:10.466808    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:10.466889    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:10.479215    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:10.479289    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:10.490415    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:10.490489    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:10.501043    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:10.501108    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:10.510761    4765 logs.go:276] 0 containers: []
	W0307 19:44:10.510773    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:10.510830    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:10.521829    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:10.521844    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:10.521849    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:10.533005    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:10.533020    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:10.545024    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:10.545038    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:10.560205    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:10.560217    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:10.577257    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:10.577269    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:10.615809    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:10.615819    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:10.651754    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:10.651767    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:10.692937    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:10.692949    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:10.707108    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:10.707121    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:10.722820    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:10.722834    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:10.734556    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:10.734566    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:10.753270    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:10.753280    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:10.767201    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:10.767213    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:10.771797    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:10.771804    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:10.785794    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:10.785806    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:10.797980    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:10.797991    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:13.324165    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:18.326343    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:18.326527    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:18.354399    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:18.354520    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:18.371244    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:18.371335    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:18.384028    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:18.384099    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:18.395436    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:18.395504    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:18.406189    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:18.406253    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:18.416881    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:18.416950    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:18.426750    4765 logs.go:276] 0 containers: []
	W0307 19:44:18.426761    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:18.426816    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:18.437121    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:18.437140    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:18.437146    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:18.477709    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:18.477721    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:18.489881    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:18.489892    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:18.508393    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:18.508403    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:18.523195    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:18.523208    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:18.547671    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:18.547681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:18.561450    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:18.561462    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:18.576053    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:18.576065    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:18.590319    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:18.590329    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:18.606465    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:18.606479    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:18.610983    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:18.610988    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:18.625408    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:18.625418    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:18.643328    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:18.643339    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:18.680806    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:18.680815    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:18.722097    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:18.722109    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:18.734321    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:18.734332    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:21.250457    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:26.252657    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:26.252826    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:26.279386    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:26.279467    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:26.291263    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:26.291346    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:26.301544    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:26.301615    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:26.312492    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:26.312571    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:26.323102    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:26.323168    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:26.336059    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:26.336128    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:26.354107    4765 logs.go:276] 0 containers: []
	W0307 19:44:26.354122    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:26.354183    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:26.364944    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:26.364966    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:26.364972    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:26.369202    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:26.369209    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:26.380960    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:26.380969    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:26.398591    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:26.398606    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:26.410228    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:26.410239    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:26.423248    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:26.423258    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:26.463613    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:26.463623    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:26.499020    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:26.499033    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:26.517413    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:26.517426    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:26.528739    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:26.528750    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:26.540883    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:26.540895    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:26.554834    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:26.554846    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:26.573267    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:26.573278    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:26.588386    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:26.588396    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:26.610698    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:26.610705    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:26.651560    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:26.651571    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:29.168089    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:34.168698    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:34.168905    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:34.194993    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:34.195110    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:34.212057    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:34.212144    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:34.225723    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:34.225797    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:34.237124    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:34.237187    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:34.247454    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:34.247534    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:34.257776    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:34.257845    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:34.269129    4765 logs.go:276] 0 containers: []
	W0307 19:44:34.269142    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:34.269197    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:34.284033    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:34.284052    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:34.284058    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:34.299962    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:34.299975    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:34.319125    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:34.319134    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:34.333956    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:34.333969    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:34.351735    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:34.351745    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:34.365802    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:34.365812    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:34.380586    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:34.380597    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:34.395804    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:34.395814    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:34.408148    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:34.408160    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:34.443620    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:34.443631    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:34.466372    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:34.466384    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:34.478063    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:34.478076    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:34.490141    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:34.490151    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:34.527901    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:34.527910    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:34.531807    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:34.531814    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:34.569089    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:34.569101    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
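
Each gathering pass above follows the same two-step pattern: discover container IDs with a docker ps name filter (k8s_<component>), then tail the last 400 log lines of each hit. A condensed sketch of that pattern (the component list and loop structure are illustrative; minikube issues these as individual ssh_runner commands):

	# For each control-plane component, find its containers (running or exited)
	# and dump the last 400 log lines -- the same commands the log above records.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
	    echo "=== ${name} ${id} ==="
	    docker logs --tail 400 "${id}"
	  done
	done
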
	I0307 19:44:37.082631    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:42.083584    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:42.083683    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:42.098328    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:42.098408    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:42.109239    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:42.109315    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:42.120130    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:42.120203    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:42.130675    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:42.130748    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:42.141114    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:42.141186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:42.151869    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:42.151936    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:42.161730    4765 logs.go:276] 0 containers: []
	W0307 19:44:42.161742    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:42.161796    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:42.172375    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:42.172398    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:42.172404    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:42.176769    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:42.176775    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:42.190545    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:42.190558    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:42.204994    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:42.205009    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:42.216842    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:42.216854    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:42.230216    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:42.230227    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:42.265660    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:42.265670    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:42.279122    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:42.279131    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:42.294841    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:42.294853    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:42.306276    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:42.306288    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:42.323884    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:42.323894    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:42.339147    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:42.339158    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:42.378610    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:42.378623    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:42.417742    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:42.417755    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:42.434606    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:42.434616    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:42.446572    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:42.446584    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:44.971350    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:49.973531    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:49.973738    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:50.003500    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:50.003618    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:50.018834    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:50.018920    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:50.031125    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:50.031199    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:50.042149    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:50.042214    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:50.056075    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:50.056148    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:50.066707    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:50.066777    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:50.076622    4765 logs.go:276] 0 containers: []
	W0307 19:44:50.076633    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:50.076690    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:50.087385    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:50.087400    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:50.087406    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:50.124441    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:50.124450    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:50.135930    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:50.135941    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:44:50.149002    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:50.149015    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:50.183935    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:50.183949    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:50.204980    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:50.204992    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:50.220392    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:50.220403    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:50.235891    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:50.235904    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:50.250302    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:50.250315    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:50.254472    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:50.254477    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:50.275813    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:50.275826    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:50.286825    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:50.286837    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:50.301474    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:50.301487    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:50.312893    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:50.312904    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:50.350207    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:50.350217    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:50.367675    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:50.367690    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:52.893101    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:44:57.895427    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:44:57.895592    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:44:57.907139    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:44:57.907217    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:44:57.918867    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:44:57.918941    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:44:57.929281    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:44:57.929350    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:44:57.939880    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:44:57.939948    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:44:57.951509    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:44:57.951582    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:44:57.962089    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:44:57.962160    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:44:57.972393    4765 logs.go:276] 0 containers: []
	W0307 19:44:57.972406    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:44:57.972460    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:44:57.983111    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:44:57.983128    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:44:57.983134    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:44:57.994720    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:44:57.994732    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:44:58.009000    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:44:58.009010    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:44:58.026834    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:44:58.026845    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:44:58.038343    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:44:58.038354    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:44:58.058293    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:44:58.058303    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:44:58.080490    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:44:58.080499    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:44:58.117909    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:44:58.117917    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:44:58.121780    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:44:58.121787    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:44:58.137852    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:44:58.137863    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:44:58.151488    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:44:58.151499    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:44:58.166237    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:44:58.166250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:44:58.182353    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:44:58.182364    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:44:58.194055    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:44:58.194066    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:44:58.231607    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:44:58.231618    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:44:58.268072    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:44:58.268083    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:00.781513    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:05.783847    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:05.783992    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:05.799664    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:05.799771    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:05.812380    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:05.812456    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:05.823308    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:05.823373    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:05.835162    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:05.835237    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:05.846204    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:05.846267    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:05.856263    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:05.856335    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:05.866817    4765 logs.go:276] 0 containers: []
	W0307 19:45:05.866834    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:05.866888    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:05.877132    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:05.877148    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:05.877155    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:05.890837    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:05.890851    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:05.905481    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:05.905496    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:05.919742    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:05.919753    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:05.931360    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:05.931373    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:05.970880    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:05.970888    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:05.985048    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:05.985059    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:05.998843    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:05.998856    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:06.013521    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:06.013533    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:06.036594    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:06.036606    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:06.053889    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:06.053900    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:06.058410    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:06.058418    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:06.093865    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:06.093876    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:06.131673    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:06.131691    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:06.144915    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:06.144927    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:06.156908    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:06.156920    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:08.670510    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:13.672792    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:13.673186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:13.720616    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:13.720755    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:13.743971    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:13.744070    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:13.758482    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:13.758570    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:13.770057    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:13.770123    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:13.781503    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:13.781579    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:13.791749    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:13.791820    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:13.801699    4765 logs.go:276] 0 containers: []
	W0307 19:45:13.801709    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:13.801762    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:13.812242    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:13.812262    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:13.812267    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:13.827997    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:13.828014    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:13.852474    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:13.852483    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:13.890394    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:13.890403    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:13.924834    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:13.924848    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:13.938963    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:13.938975    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:13.955040    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:13.955051    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:13.970902    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:13.970918    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:13.985693    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:13.985705    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:13.999621    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:13.999632    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:14.017350    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:14.017360    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:14.029623    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:14.029634    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:14.041866    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:14.041877    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:14.046419    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:14.046426    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:14.084620    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:14.084632    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:14.098333    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:14.098344    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:16.613205    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:21.615306    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:21.615445    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:21.629606    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:21.629690    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:21.641482    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:21.641547    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:21.655343    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:21.655413    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:21.665968    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:21.666038    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:21.677011    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:21.677075    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:21.687750    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:21.687830    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:21.699354    4765 logs.go:276] 0 containers: []
	W0307 19:45:21.699365    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:21.699423    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:21.709995    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:21.710022    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:21.710027    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:21.724354    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:21.724368    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:21.738802    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:21.738813    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:21.762308    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:21.762320    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:21.777626    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:21.777637    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:21.816457    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:21.816473    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:21.828262    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:21.828276    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:21.839550    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:21.839562    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:21.854621    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:21.854635    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:21.858900    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:21.858905    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:21.894750    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:21.894762    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:21.909633    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:21.909647    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:21.926920    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:21.926930    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:21.938616    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:21.938626    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:21.950211    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:21.950223    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:21.988628    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:21.988637    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:24.502820    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:29.504967    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:29.505109    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:29.518612    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:29.518687    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:29.529297    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:29.529384    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:29.540259    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:29.540329    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:29.551428    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:29.551502    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:29.561935    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:29.562008    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:29.576773    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:29.576840    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:29.587555    4765 logs.go:276] 0 containers: []
	W0307 19:45:29.587567    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:29.587627    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:29.597944    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:29.597961    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:29.597967    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:29.613930    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:29.613943    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:29.628595    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:29.628620    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:29.646016    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:29.646027    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:29.661089    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:29.661103    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:29.672812    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:29.672823    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:29.687133    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:29.687146    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:29.701162    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:29.701173    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:29.738625    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:29.738638    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:29.751240    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:29.751250    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:29.762950    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:29.762962    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:29.767047    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:29.767054    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:29.806423    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:29.806438    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:29.820216    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:29.820225    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:29.833997    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:29.834013    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:29.856341    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:29.856348    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:32.397502    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:37.399992    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:37.400248    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:45:37.426926    4765 logs.go:276] 2 containers: [2ed84c59b33f 095cdd1dff64]
	I0307 19:45:37.427047    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:45:37.443767    4765 logs.go:276] 2 containers: [b3dba32692c2 31a1ca5c904b]
	I0307 19:45:37.443857    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:45:37.457518    4765 logs.go:276] 1 containers: [6e8aa377759f]
	I0307 19:45:37.457591    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:45:37.468824    4765 logs.go:276] 2 containers: [947c7ac2d918 3b6448caa1dd]
	I0307 19:45:37.468901    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:45:37.483647    4765 logs.go:276] 1 containers: [817823c647f7]
	I0307 19:45:37.483718    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:45:37.494357    4765 logs.go:276] 2 containers: [f7a7de4f5110 6572e576175a]
	I0307 19:45:37.494427    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:45:37.504877    4765 logs.go:276] 0 containers: []
	W0307 19:45:37.504892    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:45:37.504948    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:45:37.515598    4765 logs.go:276] 1 containers: [47847237a18e]
	I0307 19:45:37.515616    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:45:37.515621    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:45:37.554692    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:45:37.554704    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:45:37.558995    4765 logs.go:123] Gathering logs for kube-apiserver [095cdd1dff64] ...
	I0307 19:45:37.559002    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095cdd1dff64"
	I0307 19:45:37.599649    4765 logs.go:123] Gathering logs for kube-scheduler [3b6448caa1dd] ...
	I0307 19:45:37.599661    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6448caa1dd"
	I0307 19:45:37.615151    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:45:37.615162    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:45:37.627096    4765 logs.go:123] Gathering logs for etcd [b3dba32692c2] ...
	I0307 19:45:37.627106    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dba32692c2"
	I0307 19:45:37.640775    4765 logs.go:123] Gathering logs for kube-controller-manager [6572e576175a] ...
	I0307 19:45:37.640786    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6572e576175a"
	I0307 19:45:37.655896    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:45:37.655906    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:45:37.679634    4765 logs.go:123] Gathering logs for kube-apiserver [2ed84c59b33f] ...
	I0307 19:45:37.679643    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed84c59b33f"
	I0307 19:45:37.693660    4765 logs.go:123] Gathering logs for etcd [31a1ca5c904b] ...
	I0307 19:45:37.693671    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a1ca5c904b"
	I0307 19:45:37.707680    4765 logs.go:123] Gathering logs for coredns [6e8aa377759f] ...
	I0307 19:45:37.707691    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8aa377759f"
	I0307 19:45:37.718928    4765 logs.go:123] Gathering logs for kube-controller-manager [f7a7de4f5110] ...
	I0307 19:45:37.718942    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a7de4f5110"
	I0307 19:45:37.735930    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:45:37.735941    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:45:37.773608    4765 logs.go:123] Gathering logs for kube-scheduler [947c7ac2d918] ...
	I0307 19:45:37.773619    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947c7ac2d918"
	I0307 19:45:37.785633    4765 logs.go:123] Gathering logs for kube-proxy [817823c647f7] ...
	I0307 19:45:37.785644    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 817823c647f7"
	I0307 19:45:37.797707    4765 logs.go:123] Gathering logs for storage-provisioner [47847237a18e] ...
	I0307 19:45:37.797718    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47847237a18e"
	I0307 19:45:40.310773    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:45.313250    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:45.313376    4765 kubeadm.go:591] duration metric: took 4m3.784817209s to restartPrimaryControlPlane
	W0307 19:45:45.313452    4765 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 19:45:45.313487    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 19:45:46.350289    4765 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036805708s)
	I0307 19:45:46.350361    4765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 19:45:46.355530    4765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 19:45:46.358501    4765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 19:45:46.361134    4765 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 19:45:46.361140    4765 kubeadm.go:156] found existing configuration files:
	
	I0307 19:45:46.361163    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf
	I0307 19:45:46.363458    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 19:45:46.363478    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 19:45:46.366342    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf
	I0307 19:45:46.368837    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 19:45:46.368864    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 19:45:46.371415    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf
	I0307 19:45:46.374596    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 19:45:46.374616    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 19:45:46.377559    4765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf
	I0307 19:45:46.380110    4765 kubeadm.go:162] "https://control-plane.minikube.internal:50510" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50510 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 19:45:46.380133    4765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
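
The sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it (here the files are simply absent after the kubeadm reset) is removed so that kubeadm init can regenerate it. Roughly, as a shell sketch (the endpoint value is taken from the log; the loop itself is an assumption):

	ENDPOINT="https://control-plane.minikube.internal:50510"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it already points at the expected endpoint.
	  sudo grep -q "${ENDPOINT}" "/etc/kubernetes/${f}" || sudo rm -f "/etc/kubernetes/${f}"
	done
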
	I0307 19:45:46.383185    4765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 19:45:46.400841    4765 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 19:45:46.400871    4765 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 19:45:46.449133    4765 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:45:46.449187    4765 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:45:46.449229    4765 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 19:45:46.510220    4765 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:45:46.514450    4765 out.go:204]   - Generating certificates and keys ...
	I0307 19:45:46.514483    4765 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 19:45:46.514509    4765 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 19:45:46.514553    4765 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:45:46.514585    4765 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:45:46.514622    4765 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:45:46.514660    4765 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 19:45:46.514699    4765 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:45:46.514750    4765 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:45:46.514801    4765 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:45:46.514837    4765 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:45:46.514856    4765 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 19:45:46.514886    4765 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:45:46.670966    4765 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:45:46.946212    4765 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:45:47.085537    4765 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:45:47.142160    4765 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:45:47.173018    4765 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:45:47.173475    4765 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:45:47.173498    4765 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 19:45:47.252696    4765 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:45:47.256604    4765 out.go:204]   - Booting up control plane ...
	I0307 19:45:47.256654    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:45:47.256695    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:45:47.256734    4765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:45:47.256795    4765 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:45:47.261586    4765 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:45:51.763947    4765 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501498 seconds
	I0307 19:45:51.764056    4765 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 19:45:51.768069    4765 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 19:45:52.279053    4765 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 19:45:52.279236    4765 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 19:45:52.782658    4765 kubeadm.go:309] [bootstrap-token] Using token: es2efn.kgkj8j6c0xom9oxf
	I0307 19:45:52.789321    4765 out.go:204]   - Configuring RBAC rules ...
	I0307 19:45:52.789389    4765 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 19:45:52.796167    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 19:45:52.798168    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 19:45:52.798902    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 19:45:52.799732    4765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 19:45:52.800557    4765 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 19:45:52.803710    4765 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 19:45:52.985816    4765 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 19:45:53.199015    4765 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 19:45:53.199667    4765 kubeadm.go:309] 
	I0307 19:45:53.199698    4765 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 19:45:53.199700    4765 kubeadm.go:309] 
	I0307 19:45:53.199736    4765 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 19:45:53.199740    4765 kubeadm.go:309] 
	I0307 19:45:53.199763    4765 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 19:45:53.199813    4765 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 19:45:53.199876    4765 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 19:45:53.199911    4765 kubeadm.go:309] 
	I0307 19:45:53.199961    4765 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 19:45:53.199969    4765 kubeadm.go:309] 
	I0307 19:45:53.199992    4765 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 19:45:53.199995    4765 kubeadm.go:309] 
	I0307 19:45:53.200026    4765 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 19:45:53.200073    4765 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 19:45:53.200173    4765 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 19:45:53.200178    4765 kubeadm.go:309] 
	I0307 19:45:53.200253    4765 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 19:45:53.200319    4765 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 19:45:53.200324    4765 kubeadm.go:309] 
	I0307 19:45:53.200425    4765 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token es2efn.kgkj8j6c0xom9oxf \
	I0307 19:45:53.200478    4765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 \
	I0307 19:45:53.200493    4765 kubeadm.go:309] 	--control-plane 
	I0307 19:45:53.200495    4765 kubeadm.go:309] 
	I0307 19:45:53.200536    4765 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 19:45:53.200541    4765 kubeadm.go:309] 
	I0307 19:45:53.200580    4765 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token es2efn.kgkj8j6c0xom9oxf \
	I0307 19:45:53.200685    4765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8b467e6a6e778a0a4b26f2d605aedb7acbb7b86477eb1b7d8cc5affd5ffec0d5 
	I0307 19:45:53.200743    4765 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 19:45:53.200748    4765 cni.go:84] Creating CNI manager for ""
	I0307 19:45:53.200756    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:45:53.206539    4765 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 19:45:53.216535    4765 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 19:45:53.220013    4765 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0307 19:45:53.225043    4765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 19:45:53.225096    4765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 19:45:53.225121    4765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-126000 minikube.k8s.io/updated_at=2024_03_07T19_45_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=stopped-upgrade-126000 minikube.k8s.io/primary=true
	I0307 19:45:53.268327    4765 kubeadm.go:1106] duration metric: took 43.278042ms to wait for elevateKubeSystemPrivileges
	I0307 19:45:53.268340    4765 ops.go:34] apiserver oom_adj: -16
	W0307 19:45:53.268419    4765 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 19:45:53.268428    4765 kubeadm.go:393] duration metric: took 4m11.753652s to StartCluster
	I0307 19:45:53.268438    4765 settings.go:142] acquiring lock: {Name:mka91134012bc21ec54a241fdaa124189f2aed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:45:53.268507    4765 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:45:53.268910    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/kubeconfig: {Name:mk4dcca67acc40e2ef9a6fcc3838689fa74c4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:45:53.269271    4765 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:45:53.273509    4765 out.go:177] * Verifying Kubernetes components...
	I0307 19:45:53.269278    4765 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 19:45:53.269343    4765 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:45:53.280490    4765 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-126000"
	I0307 19:45:53.280506    4765 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-126000"
	W0307 19:45:53.280513    4765 addons.go:243] addon storage-provisioner should already be in state true
	I0307 19:45:53.280524    4765 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0307 19:45:53.280532    4765 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-126000"
	I0307 19:45:53.280544    4765 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-126000"
	I0307 19:45:53.280508    4765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 19:45:53.281738    4765 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18333-1199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1037a76a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 19:45:53.281854    4765 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-126000"
	W0307 19:45:53.281858    4765 addons.go:243] addon default-storageclass should already be in state true
	I0307 19:45:53.281866    4765 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0307 19:45:53.286487    4765 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 19:45:53.290455    4765 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:45:53.290461    4765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 19:45:53.290467    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:45:53.291134    4765 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 19:45:53.291138    4765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 19:45:53.291142    4765 sshutil.go:53] new ssh client: &{IP:localhost Port:50475 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0307 19:45:53.374479    4765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 19:45:53.379960    4765 api_server.go:52] waiting for apiserver process to appear ...
	I0307 19:45:53.380000    4765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 19:45:53.384163    4765 api_server.go:72] duration metric: took 114.88475ms to wait for apiserver process to appear ...
	I0307 19:45:53.384171    4765 api_server.go:88] waiting for apiserver healthz status ...
	I0307 19:45:53.384178    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:45:53.414677    4765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 19:45:53.421553    4765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 19:45:58.386042    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:45:58.386069    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:03.386041    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:03.386062    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:08.386119    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:08.386143    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:13.386223    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:13.386247    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:18.386435    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:18.386478    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:23.386900    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:23.386932    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 19:46:23.763325    4765 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 19:46:23.766731    4765 out.go:177] * Enabled addons: storage-provisioner
	I0307 19:46:23.778622    4765 addons.go:505] duration metric: took 30.510592875s for enable addons: enabled=[storage-provisioner]
	I0307 19:46:28.387297    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:28.387401    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:33.388454    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:33.388479    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:38.388788    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:38.388817    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:43.389966    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:43.390711    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:48.392424    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:48.392472    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:46:53.394562    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:46:53.394798    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:46:53.416444    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:46:53.416525    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:46:53.446717    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:46:53.446784    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:46:53.457868    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:46:53.457929    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:46:53.469274    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:46:53.469347    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:46:53.479596    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:46:53.479670    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:46:53.495569    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:46:53.495636    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:46:53.505791    4765 logs.go:276] 0 containers: []
	W0307 19:46:53.505808    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:46:53.505864    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:46:53.516345    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:46:53.516362    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:46:53.516368    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:46:53.550982    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:46:53.550994    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:46:53.586398    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:46:53.586410    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:46:53.598199    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:46:53.598214    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:46:53.609709    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:46:53.609721    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:46:53.634829    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:46:53.634837    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:46:53.647518    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:46:53.647533    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:46:53.666901    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:46:53.666918    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:46:53.671699    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:46:53.671711    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:46:53.687905    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:46:53.687918    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:46:53.707736    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:46:53.707749    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:46:53.721654    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:46:53.721669    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:46:53.735421    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:46:53.735434    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:46:56.258326    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:01.261020    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:01.261242    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:01.283327    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:01.283432    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:01.298635    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:01.298718    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:01.311294    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:01.311367    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:01.322422    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:01.322490    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:01.332971    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:01.333047    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:01.343255    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:01.343325    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:01.353961    4765 logs.go:276] 0 containers: []
	W0307 19:47:01.353972    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:01.354028    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:01.363948    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:01.363967    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:01.363973    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:01.369200    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:01.369210    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:01.403858    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:01.403874    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:01.419546    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:01.419559    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:01.433548    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:01.433560    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:01.445419    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:01.445429    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:01.457674    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:01.457687    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:01.482077    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:01.482089    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:01.494480    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:01.494492    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:01.530164    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:01.530178    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:01.549445    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:01.549457    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:01.561708    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:01.561720    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:01.576117    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:01.576127    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:04.095693    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:09.098020    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:09.098098    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:09.111183    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:09.111255    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:09.121320    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:09.121385    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:09.132523    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:09.132593    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:09.143589    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:09.143664    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:09.154455    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:09.154524    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:09.165192    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:09.165257    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:09.175135    4765 logs.go:276] 0 containers: []
	W0307 19:47:09.175144    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:09.175203    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:09.186920    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:09.186937    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:09.186942    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:09.198554    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:09.198569    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:09.233115    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:09.233127    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:09.267973    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:09.267986    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:09.280201    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:09.280215    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:09.303129    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:09.303140    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:09.321273    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:09.321284    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:09.346026    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:09.346034    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:09.350185    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:09.350191    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:09.364728    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:09.364739    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:09.380090    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:09.380100    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:09.391885    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:09.391898    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:09.404668    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:09.404683    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:11.921133    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:16.923338    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:16.923480    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:16.938479    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:16.938551    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:16.950560    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:16.950639    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:16.960848    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:16.960926    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:16.972351    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:16.972421    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:16.983550    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:16.983628    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:16.993534    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:16.993595    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:17.003977    4765 logs.go:276] 0 containers: []
	W0307 19:47:17.003987    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:17.004044    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:17.014566    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:17.014582    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:17.014587    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:17.029059    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:17.029069    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:17.042868    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:17.042879    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:17.066072    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:17.066082    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:17.077252    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:17.077267    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:17.112421    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:17.112429    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:17.117072    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:17.117082    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:17.153903    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:17.153915    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:17.165821    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:17.165833    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:17.186756    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:17.186768    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:17.199229    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:17.199241    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:17.215235    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:17.215246    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:17.234967    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:17.234980    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:19.751349    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:24.753323    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:24.753403    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:24.764998    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:24.765070    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:24.776649    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:24.776728    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:24.791027    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:24.791114    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:24.802366    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:24.802433    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:24.813631    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:24.813703    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:24.825286    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:24.825351    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:24.836633    4765 logs.go:276] 0 containers: []
	W0307 19:47:24.836648    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:24.836707    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:24.848383    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:24.848399    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:24.848403    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:24.864335    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:24.864346    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:24.882513    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:24.882528    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:24.916993    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:24.917007    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:24.921908    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:24.921921    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:24.959141    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:24.959154    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:24.974022    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:24.974045    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:24.991860    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:24.991876    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:25.003683    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:25.003695    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:25.015799    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:25.015809    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:25.041004    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:25.041016    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:25.052408    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:25.052421    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:25.065878    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:25.065890    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:27.579718    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:32.582180    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:32.582259    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:32.595131    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:32.595197    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:32.606107    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:32.606177    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:32.616076    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:32.616138    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:32.626341    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:32.626404    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:32.636848    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:32.636909    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:32.650099    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:32.650166    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:32.660373    4765 logs.go:276] 0 containers: []
	W0307 19:47:32.660386    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:32.660434    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:32.671337    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:32.671351    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:32.671357    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:32.685719    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:32.685731    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:32.696918    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:32.696931    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:32.708092    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:32.708105    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:32.723895    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:32.723906    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:32.735328    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:32.735338    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:32.746716    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:32.746729    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:32.783689    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:32.783702    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:32.798257    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:32.798269    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:32.809899    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:32.809914    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:32.827184    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:32.827196    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:32.850410    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:32.850417    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:32.883458    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:32.883469    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:35.389309    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:40.390246    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:40.390489    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:40.419933    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:40.420049    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:40.438371    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:40.438458    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:40.452583    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:40.452645    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:40.463818    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:40.463884    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:40.482493    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:40.482557    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:40.492760    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:40.492818    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:40.503296    4765 logs.go:276] 0 containers: []
	W0307 19:47:40.503306    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:40.503361    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:40.514198    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:40.514213    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:40.514218    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:40.527354    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:40.527366    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:40.538827    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:40.538837    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:40.550675    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:40.550686    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:40.561704    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:40.561718    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:40.587112    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:40.587125    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:40.591738    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:40.591747    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:40.625410    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:40.625424    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:40.639905    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:40.639915    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:40.651145    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:40.651156    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:40.662745    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:40.662757    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:40.697647    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:40.697654    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:40.712505    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:40.712514    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:43.238683    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:48.239857    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:48.240185    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:48.270310    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:48.270431    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:48.289997    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:48.290084    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:48.304401    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:48.304480    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:48.316391    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:48.316460    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:48.326967    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:48.327030    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:48.337204    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:48.337265    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:48.347390    4765 logs.go:276] 0 containers: []
	W0307 19:47:48.347402    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:48.347456    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:48.358002    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:48.358017    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:48.358023    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:48.393471    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:48.393484    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:48.405430    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:48.405445    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:48.416633    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:48.416644    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:48.428005    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:48.428018    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:48.446094    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:48.446104    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:48.457538    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:48.457550    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:48.461738    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:48.461747    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:48.495075    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:48.495091    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:48.509180    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:48.509193    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:48.522871    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:48.522881    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:48.541945    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:48.541957    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:48.564746    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:48.564753    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:51.077664    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:47:56.079693    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:47:56.079910    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:47:56.103149    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:47:56.103263    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:47:56.118547    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:47:56.118629    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:47:56.131370    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:47:56.131440    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:47:56.142057    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:47:56.142124    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:47:56.152277    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:47:56.152341    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:47:56.162919    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:47:56.162985    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:47:56.173160    4765 logs.go:276] 0 containers: []
	W0307 19:47:56.173171    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:47:56.173222    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:47:56.183390    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:47:56.183405    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:47:56.183410    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:47:56.199101    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:47:56.199112    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:47:56.218281    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:47:56.218291    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:47:56.235505    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:47:56.235517    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:47:56.259008    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:47:56.259015    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:47:56.270303    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:47:56.270315    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:47:56.304152    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:47:56.304159    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:47:56.321479    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:47:56.321491    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:47:56.332749    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:47:56.332760    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:47:56.344454    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:47:56.344466    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:47:56.355778    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:47:56.355787    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:47:56.360010    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:47:56.360018    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:47:56.395475    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:47:56.395488    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:47:58.911344    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:03.913941    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:03.914361    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:03.954587    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:03.954725    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:03.980406    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:03.980499    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:03.994677    4765 logs.go:276] 2 containers: [28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:03.994755    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:04.011694    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:04.011762    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:04.022452    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:04.022527    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:04.033174    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:04.033235    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:04.042993    4765 logs.go:276] 0 containers: []
	W0307 19:48:04.043004    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:04.043055    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:04.060075    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:04.060093    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:04.060098    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:04.071301    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:04.071313    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:04.075562    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:04.075570    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:04.108293    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:04.108306    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:04.119759    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:04.119770    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:04.134987    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:04.135000    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:04.146476    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:04.146485    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:04.163878    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:04.163887    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:04.186908    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:04.186916    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:04.220292    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:04.220299    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:04.234204    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:04.234218    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:04.254363    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:04.254372    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:04.265538    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:04.265551    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:06.779047    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:11.781387    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:11.781843    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:11.820437    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:11.820565    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:11.842632    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:11.842741    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:11.857542    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:11.857615    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:11.870816    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:11.870879    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:11.883278    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:11.883339    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:11.894208    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:11.894290    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:11.904130    4765 logs.go:276] 0 containers: []
	W0307 19:48:11.904144    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:11.904201    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:11.914803    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:11.914823    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:11.914830    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:11.918967    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:11.918975    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:11.936536    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:11.936549    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:11.947531    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:11.947543    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:11.959571    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:11.959586    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:11.975383    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:11.975396    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:12.008583    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:12.008591    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:12.044391    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:12.044404    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:12.059698    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:12.059712    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:12.075041    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:12.075050    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:12.091148    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:12.091160    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:12.107680    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:12.107693    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:12.119518    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:12.119529    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:12.134185    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:12.134195    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:12.157625    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:12.157634    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:14.669573    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:19.671506    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:19.671941    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:19.711976    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:19.712106    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:19.734191    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:19.734311    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:19.749382    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:19.749453    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:19.763117    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:19.763188    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:19.778206    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:19.778275    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:19.788748    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:19.788811    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:19.799354    4765 logs.go:276] 0 containers: []
	W0307 19:48:19.799365    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:19.799420    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:19.809929    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:19.809944    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:19.809948    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:19.822268    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:19.822281    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:19.863863    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:19.863874    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:19.875617    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:19.875630    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:19.887114    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:19.887127    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:19.906158    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:19.906167    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:19.925320    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:19.925331    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:19.936353    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:19.936367    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:19.953726    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:19.953735    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:19.977378    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:19.977384    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:20.010797    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:20.010804    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:20.014926    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:20.014936    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:20.030029    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:20.030038    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:20.042015    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:20.042024    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:20.054210    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:20.054221    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:22.568808    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:27.570082    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:27.570565    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:27.611103    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:27.611226    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:27.632571    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:27.632673    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:27.647780    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:27.647871    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:27.659970    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:27.660039    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:27.670413    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:27.670474    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:27.681072    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:27.681126    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:27.691042    4765 logs.go:276] 0 containers: []
	W0307 19:48:27.691053    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:27.691108    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:27.701408    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:27.701426    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:27.701431    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:27.715875    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:27.715888    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:27.727265    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:27.727275    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:27.739505    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:27.739514    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:27.750932    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:27.750945    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:27.762625    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:27.762640    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:27.774475    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:27.774489    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:27.795855    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:27.795866    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:27.807733    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:27.807742    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:27.841290    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:27.841301    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:27.845675    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:27.845682    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:27.880600    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:27.880611    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:27.898202    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:27.898213    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:27.917073    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:27.917084    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:27.928240    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:27.928250    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:30.454176    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:35.454798    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:35.455229    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:35.491116    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:35.491239    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:35.511855    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:35.511974    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:35.527237    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:35.527311    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:35.539376    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:35.539452    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:35.550287    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:35.550345    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:35.560179    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:35.560244    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:35.570687    4765 logs.go:276] 0 containers: []
	W0307 19:48:35.570699    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:35.570758    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:35.580878    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:35.580895    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:35.580901    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:35.615011    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:35.615022    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:35.626363    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:35.626374    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:35.643786    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:35.643796    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:35.668823    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:35.668831    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:35.683199    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:35.683210    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:35.695568    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:35.695580    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:35.712580    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:35.712590    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:35.727382    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:35.727394    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:35.763105    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:35.763114    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:35.767315    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:35.767322    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:35.782162    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:35.782174    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:35.802872    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:35.802887    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:35.815091    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:35.815103    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:35.826234    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:35.826246    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:38.341046    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:43.343113    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:43.343513    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:43.388525    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:43.388634    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:43.405325    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:43.405409    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:43.418771    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:43.418840    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:43.431561    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:43.431630    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:43.442903    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:43.442967    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:43.453282    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:43.453343    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:43.463855    4765 logs.go:276] 0 containers: []
	W0307 19:48:43.463866    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:43.463920    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:43.476684    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:43.476702    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:43.476708    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:43.488548    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:43.488558    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:43.505690    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:43.505699    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:43.539986    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:43.539997    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:43.554576    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:43.554587    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:43.568671    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:43.568682    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:43.580716    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:43.580730    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:43.595066    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:43.595074    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:43.606903    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:43.606912    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:43.641865    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:43.641873    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:43.646213    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:43.646220    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:43.661410    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:43.661420    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:43.672642    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:43.672656    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:43.684154    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:43.684166    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:43.696102    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:43.696112    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:46.221498    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:51.222823    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:51.222936    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:51.240110    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:51.240186    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:51.253293    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:51.253365    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:51.265117    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:51.265185    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:51.275152    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:51.275220    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:51.285634    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:51.285697    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:51.296021    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:51.296085    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:51.309280    4765 logs.go:276] 0 containers: []
	W0307 19:48:51.309293    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:51.309348    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:51.319671    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:51.319689    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:51.319695    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:51.366463    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:51.366477    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:51.390118    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:51.390127    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:51.404105    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:51.404117    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:51.418583    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:51.418595    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:51.430404    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:51.430416    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:51.468670    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:51.468681    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:51.482196    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:51.482208    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:48:51.497621    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:51.497634    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:51.508938    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:51.508948    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:51.532311    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:51.532326    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:51.536763    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:51.536768    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:51.548029    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:51.548040    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:51.559561    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:51.559572    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:51.576328    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:51.576338    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:54.089467    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:48:59.091641    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:48:59.091874    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:48:59.118816    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:48:59.118933    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:48:59.136203    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:48:59.136302    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:48:59.150470    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:48:59.150542    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:48:59.162249    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:48:59.162311    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:48:59.172938    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:48:59.173001    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:48:59.182925    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:48:59.182991    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:48:59.193174    4765 logs.go:276] 0 containers: []
	W0307 19:48:59.193184    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:48:59.193234    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:48:59.208288    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:48:59.208306    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:48:59.208311    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:48:59.222681    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:48:59.222691    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:48:59.237044    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:48:59.237054    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:48:59.249821    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:48:59.249836    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:48:59.261673    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:48:59.261685    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:48:59.273372    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:48:59.273383    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:48:59.291097    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:48:59.291108    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:48:59.304739    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:48:59.304748    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:48:59.309508    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:48:59.309517    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:48:59.321418    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:48:59.321430    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:48:59.345079    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:48:59.345085    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:48:59.356631    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:48:59.356644    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:48:59.391380    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:48:59.391389    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:48:59.424759    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:48:59.424772    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:48:59.439450    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:48:59.439460    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:01.952941    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:06.955009    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:06.955382    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:06.991952    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:06.992070    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:07.012058    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:07.012153    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:07.026966    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:07.027041    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:07.039403    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:07.039471    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:07.051473    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:07.051534    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:07.062241    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:07.062295    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:07.072472    4765 logs.go:276] 0 containers: []
	W0307 19:49:07.072482    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:07.072527    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:07.082982    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:07.082997    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:07.083001    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:07.099010    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:07.099024    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:07.116279    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:07.116293    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:07.140192    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:07.140200    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:07.154699    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:07.154711    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:07.169866    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:07.169877    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:07.181659    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:07.181671    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:07.186169    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:07.186179    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:07.219913    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:07.219924    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:07.233240    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:07.233252    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:07.244753    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:07.244765    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:07.256470    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:07.256482    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:07.291276    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:07.291285    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:07.302933    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:07.302944    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:07.314502    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:07.314514    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:09.831455    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:14.833946    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:14.834032    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:14.850686    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:14.850746    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:14.862955    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:14.863027    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:14.875093    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:14.875159    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:14.886587    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:14.886643    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:14.897586    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:14.897657    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:14.909608    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:14.909671    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:14.920909    4765 logs.go:276] 0 containers: []
	W0307 19:49:14.920922    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:14.920964    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:14.931725    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:14.931740    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:14.931744    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:14.944378    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:14.944392    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:14.958008    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:14.958020    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:14.971747    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:14.971755    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:14.975878    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:14.975884    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:14.994607    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:14.994618    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:15.030340    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:15.030356    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:15.078563    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:15.078576    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:15.093361    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:15.093373    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:15.105484    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:15.105495    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:15.119674    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:15.119683    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:15.140499    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:15.140510    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:15.157595    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:15.157610    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:15.170379    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:15.170391    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:15.182535    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:15.182546    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:17.709705    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:22.711728    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:22.711848    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:22.724648    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:22.724719    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:22.737127    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:22.737199    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:22.751009    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:22.751088    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:22.764204    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:22.764278    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:22.781087    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:22.781161    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:22.792266    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:22.792329    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:22.803182    4765 logs.go:276] 0 containers: []
	W0307 19:49:22.803193    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:22.803248    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:22.813937    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:22.813955    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:22.813960    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:22.831455    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:22.831467    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:22.845547    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:22.845559    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:22.858055    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:22.858069    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:22.882468    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:22.882478    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:22.917454    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:22.917465    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:22.921870    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:22.921878    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:22.934238    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:22.934247    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:22.950628    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:22.950639    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:22.962118    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:22.962133    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:22.973225    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:22.973237    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:23.007089    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:23.007103    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:23.018746    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:23.018757    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:23.030096    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:23.030105    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:23.044587    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:23.044597    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:25.564425    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:30.566514    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:30.566954    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:30.606066    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:30.606185    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:30.627524    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:30.627613    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:30.642656    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:30.642754    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:30.655354    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:30.655432    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:30.666079    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:30.666139    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:30.676791    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:30.676856    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:30.687769    4765 logs.go:276] 0 containers: []
	W0307 19:49:30.687781    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:30.687832    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:30.698070    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:30.698085    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:30.698098    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:30.712200    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:30.712211    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:30.723876    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:30.723889    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:30.747159    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:30.747165    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:30.781856    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:30.781866    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:30.797999    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:30.798012    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:30.812722    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:30.812730    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:30.824521    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:30.824534    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:30.836660    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:30.836672    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:30.856464    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:30.856476    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:30.879827    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:30.879841    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:30.895852    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:30.895866    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:30.930905    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:30.930914    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:30.935370    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:30.935378    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:30.949731    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:30.949744    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:33.469432    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:38.470546    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:38.470628    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:38.482864    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:38.482930    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:38.497223    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:38.497287    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:38.508861    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:38.508923    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:38.519710    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:38.519768    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:38.530934    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:38.530991    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:38.542892    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:38.542962    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:38.553405    4765 logs.go:276] 0 containers: []
	W0307 19:49:38.553414    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:38.553466    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:38.564348    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:38.564372    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:38.564383    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:38.601512    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:38.601526    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:38.639839    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:38.639850    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:38.651020    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:38.651029    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:38.663744    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:38.663758    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:38.680756    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:38.680772    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:38.699337    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:38.699348    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:38.720457    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:38.720475    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:38.745861    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:38.745873    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:38.750354    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:38.750367    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:38.767382    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:38.767400    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:38.784246    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:38.784257    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:38.796115    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:38.796127    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:38.812803    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:38.812820    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:38.826481    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:38.826492    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:41.340164    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:46.342837    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:46.343280    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 19:49:46.388880    4765 logs.go:276] 1 containers: [d31d8889ec62]
	I0307 19:49:46.389015    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 19:49:46.411539    4765 logs.go:276] 1 containers: [f5c75341169e]
	I0307 19:49:46.411639    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 19:49:46.425947    4765 logs.go:276] 4 containers: [3514da73ee37 3e74a2cef80b 28a42930fc0b cfa4a84ca3c0]
	I0307 19:49:46.426020    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 19:49:46.437811    4765 logs.go:276] 1 containers: [7a00c691215c]
	I0307 19:49:46.437881    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 19:49:46.448058    4765 logs.go:276] 1 containers: [d551db75555a]
	I0307 19:49:46.448129    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 19:49:46.458323    4765 logs.go:276] 1 containers: [bf19d8f374f7]
	I0307 19:49:46.458387    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 19:49:46.469036    4765 logs.go:276] 0 containers: []
	W0307 19:49:46.469047    4765 logs.go:278] No container was found matching "kindnet"
	I0307 19:49:46.469106    4765 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 19:49:46.480747    4765 logs.go:276] 1 containers: [7e3e563d89b4]
	I0307 19:49:46.480765    4765 logs.go:123] Gathering logs for dmesg ...
	I0307 19:49:46.480770    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:49:46.484991    4765 logs.go:123] Gathering logs for coredns [cfa4a84ca3c0] ...
	I0307 19:49:46.485000    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfa4a84ca3c0"
	I0307 19:49:46.496733    4765 logs.go:123] Gathering logs for storage-provisioner [7e3e563d89b4] ...
	I0307 19:49:46.496745    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3e563d89b4"
	I0307 19:49:46.508468    4765 logs.go:123] Gathering logs for kube-controller-manager [bf19d8f374f7] ...
	I0307 19:49:46.508478    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf19d8f374f7"
	I0307 19:49:46.525850    4765 logs.go:123] Gathering logs for coredns [28a42930fc0b] ...
	I0307 19:49:46.525862    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a42930fc0b"
	I0307 19:49:46.538211    4765 logs.go:123] Gathering logs for kube-scheduler [7a00c691215c] ...
	I0307 19:49:46.538223    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a00c691215c"
	I0307 19:49:46.552677    4765 logs.go:123] Gathering logs for Docker ...
	I0307 19:49:46.552685    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 19:49:46.576931    4765 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:49:46.576940    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 19:49:46.611726    4765 logs.go:123] Gathering logs for kube-apiserver [d31d8889ec62] ...
	I0307 19:49:46.611738    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d31d8889ec62"
	I0307 19:49:46.631597    4765 logs.go:123] Gathering logs for coredns [3514da73ee37] ...
	I0307 19:49:46.631608    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3514da73ee37"
	I0307 19:49:46.643023    4765 logs.go:123] Gathering logs for kube-proxy [d551db75555a] ...
	I0307 19:49:46.643032    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d551db75555a"
	I0307 19:49:46.654453    4765 logs.go:123] Gathering logs for container status ...
	I0307 19:49:46.654464    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 19:49:46.665985    4765 logs.go:123] Gathering logs for kubelet ...
	I0307 19:49:46.665994    4765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:49:46.699698    4765 logs.go:123] Gathering logs for etcd [f5c75341169e] ...
	I0307 19:49:46.699704    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c75341169e"
	I0307 19:49:46.713210    4765 logs.go:123] Gathering logs for coredns [3e74a2cef80b] ...
	I0307 19:49:46.713223    4765 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e74a2cef80b"
	I0307 19:49:49.226163    4765 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 19:49:54.228297    4765 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 19:49:54.234278    4765 out.go:177] 
	W0307 19:49:54.239353    4765 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 19:49:54.239386    4765 out.go:239] * 
	* 
	W0307 19:49:54.241881    4765 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:54.254290    4765 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.33s)
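The failure above is minikube's api_server.go wait loop timing out: between each "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded" pair it re-lists the k8s_* containers and tails their logs, then probes https://10.0.2.15:8443/healthz again until the 6m0s node wait expires. The Go sketch below is a minimal, hypothetical reconstruction of that probe loop, not minikube's actual code; the URL, the roughly 5s per-probe timeout, and the final error text are taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the overall deadline passes, mirroring the repeated
	// "Checking apiserver healthz" / "stopped" pairs in the log above.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout; the log shows ~5s between each check and its "stopped" line
			Transport: &http.Transport{
				// Test-only: the apiserver certificate is self-signed, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil
				}
			}
			time.Sleep(3 * time.Second) // back off before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X GUEST_START:", err)
		}
	}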

TestPause/serial/Start (10.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-568000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-568000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.004601125s)

-- stdout --
	* [pause-568000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-568000" primary control-plane node in "pause-568000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-568000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-568000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-568000 -n pause-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-568000 -n pause-568000: exit status 7 (67.1805ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.07s)
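Unlike the upgrade test above, which at least booted a VM, this and every remaining failure in the report share one host-level fault: socket_vmnet_client cannot connect to the /var/run/socket_vmnet unix socket, so QEMU never receives its network file descriptor and minikube gives up after a single delete-and-retry. A hypothetical check that reproduces the refused connection without involving qemu at all:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that minikube hands to socket_vmnet_client
		// (SocketVMnetPath:/var/run/socket_vmnet in the cluster config).
		// A "connection refused" here reproduces the ERROR lines above:
		// no socket_vmnet daemon is listening on the socket.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

On a healthy host this prints "socket_vmnet is listening"; when it fails, restarting the socket_vmnet service (for a Homebrew install, something like `sudo brew services restart socket_vmnet`, per the minikube qemu2 driver docs) is the usual remedy, though the exact service management depends on how socket_vmnet was installed.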

TestNoKubernetes/serial/StartWithK8s (9.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 
E0307 19:47:37.679725    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 : exit status 80 (9.843169583s)

-- stdout --
	* [NoKubernetes-059000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-059000" primary control-plane node in "NoKubernetes-059000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-059000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-059000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000: exit status 7 (67.808875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-059000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.91s)
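The post-mortem pattern repeated after each of these failures ("status error: exit status 7 (may be ok)") comes from the harness running the status command and recording its exit code and wall time. Below is a minimal, hypothetical sketch of that capture step, not the actual helpers_test.go implementation; exit status 7 here simply corresponds to the "Stopped" host state printed on stdout.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Run the same status probe the post-mortem helper runs and recover
		// the numeric exit code. minikube signals "host not running" through
		// a non-zero status code, which the harness logs as "may be ok"
		// rather than treating it as a hard post-mortem failure.
		start := time.Now()
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "NoKubernetes-059000", "-n", "NoKubernetes-059000")
		out, err := cmd.CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode() // e.g. 7 when the host is stopped
		}
		fmt.Printf("%s: exit status %d (%s)\n", string(out), code, time.Since(start))
	}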

TestNoKubernetes/serial/StartWithStopK8s (5.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 : exit status 80 (5.854697333s)

-- stdout --
	* [NoKubernetes-059000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-059000
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-059000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000: exit status 7 (68.901084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-059000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.92s)

TestNoKubernetes/serial/Start (6.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 : exit status 80 (6.300963375s)

-- stdout --
	* [NoKubernetes-059000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-059000
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-059000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000: exit status 7 (60.731584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-059000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (6.36s)

TestNoKubernetes/serial/StartNoArgs (6.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 : exit status 80 (6.3692985s)

-- stdout --
	* [NoKubernetes-059000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-059000
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-059000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-059000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-059000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-059000 -n NoKubernetes-059000: exit status 7 (62.3955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-059000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.43s)

TestNetworkPlugins/group/auto/Start (10.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.056186042s)

-- stdout --
	* [auto-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-963000" primary control-plane node in "auto-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:48:38.475138    5179 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:48:38.475278    5179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:48:38.475282    5179 out.go:304] Setting ErrFile to fd 2...
	I0307 19:48:38.475284    5179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:48:38.475408    5179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:48:38.476498    5179 out.go:298] Setting JSON to false
	I0307 19:48:38.493179    5179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4690,"bootTime":1709865028,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:48:38.493248    5179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:48:38.499842    5179 out.go:177] * [auto-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:48:38.506780    5179 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:48:38.506834    5179 notify.go:220] Checking for updates...
	I0307 19:48:38.514866    5179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:48:38.517881    5179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:48:38.520847    5179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:48:38.523891    5179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:48:38.526771    5179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:48:38.530200    5179 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:48:38.530269    5179 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:48:38.530321    5179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:48:38.534884    5179 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:48:38.541864    5179 start.go:297] selected driver: qemu2
	I0307 19:48:38.541871    5179 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:48:38.541877    5179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:48:38.544252    5179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:48:38.547841    5179 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:48:38.549100    5179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:48:38.549118    5179 cni.go:84] Creating CNI manager for ""
	I0307 19:48:38.549124    5179 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:48:38.549134    5179 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:48:38.549167    5179 start.go:340] cluster config:
	{Name:auto-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:48:38.553775    5179 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:48:38.560894    5179 out.go:177] * Starting "auto-963000" primary control-plane node in "auto-963000" cluster
	I0307 19:48:38.564793    5179 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:48:38.564806    5179 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:48:38.564817    5179 cache.go:56] Caching tarball of preloaded images
	I0307 19:48:38.564868    5179 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:48:38.564873    5179 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:48:38.564936    5179 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/auto-963000/config.json ...
	I0307 19:48:38.564948    5179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/auto-963000/config.json: {Name:mk1ca107480a97044550316a979346392602d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:48:38.565157    5179 start.go:360] acquireMachinesLock for auto-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:48:38.565190    5179 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "auto-963000"
	I0307 19:48:38.565200    5179 start.go:93] Provisioning new machine with config: &{Name:auto-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:48:38.565228    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:48:38.568968    5179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:48:38.585913    5179 start.go:159] libmachine.API.Create for "auto-963000" (driver="qemu2")
	I0307 19:48:38.585936    5179 client.go:168] LocalClient.Create starting
	I0307 19:48:38.586000    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:48:38.586030    5179 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:38.586041    5179 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:38.586086    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:48:38.586108    5179 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:38.586116    5179 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:38.586488    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:48:38.725859    5179 main.go:141] libmachine: Creating SSH key...
	I0307 19:48:38.973297    5179 main.go:141] libmachine: Creating Disk image...
	I0307 19:48:38.973311    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:48:38.973525    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:38.986287    5179 main.go:141] libmachine: STDOUT: 
	I0307 19:48:38.986308    5179 main.go:141] libmachine: STDERR: 
	I0307 19:48:38.986366    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2 +20000M
	I0307 19:48:38.997392    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:48:38.997411    5179 main.go:141] libmachine: STDERR: 
	I0307 19:48:38.997429    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:38.997435    5179 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:48:38.997476    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:92:be:ad:40:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:38.999277    5179 main.go:141] libmachine: STDOUT: 
	I0307 19:48:38.999292    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:48:38.999313    5179 client.go:171] duration metric: took 413.388125ms to LocalClient.Create
	I0307 19:48:41.001508    5179 start.go:128] duration metric: took 2.436336875s to createHost
	I0307 19:48:41.001625    5179 start.go:83] releasing machines lock for "auto-963000", held for 2.436523333s
	W0307 19:48:41.001733    5179 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:48:41.012869    5179 out.go:177] * Deleting "auto-963000" in qemu2 ...
	W0307 19:48:41.041157    5179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:48:41.041215    5179 start.go:728] Will try again in 5 seconds ...
	I0307 19:48:46.043230    5179 start.go:360] acquireMachinesLock for auto-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:48:46.043711    5179 start.go:364] duration metric: took 342.25µs to acquireMachinesLock for "auto-963000"
	I0307 19:48:46.043854    5179 start.go:93] Provisioning new machine with config: &{Name:auto-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:48:46.044169    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:48:46.053881    5179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:48:46.101362    5179 start.go:159] libmachine.API.Create for "auto-963000" (driver="qemu2")
	I0307 19:48:46.101417    5179 client.go:168] LocalClient.Create starting
	I0307 19:48:46.101527    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:48:46.101585    5179 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:46.101603    5179 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:46.101670    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:48:46.101712    5179 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:46.101726    5179 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:46.102276    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:48:46.250447    5179 main.go:141] libmachine: Creating SSH key...
	I0307 19:48:46.431511    5179 main.go:141] libmachine: Creating Disk image...
	I0307 19:48:46.431519    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:48:46.431721    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:46.444388    5179 main.go:141] libmachine: STDOUT: 
	I0307 19:48:46.444414    5179 main.go:141] libmachine: STDERR: 
	I0307 19:48:46.444482    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2 +20000M
	I0307 19:48:46.456294    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:48:46.456321    5179 main.go:141] libmachine: STDERR: 
	I0307 19:48:46.456336    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:46.456340    5179 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:48:46.456383    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:3d:a9:23:c6:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/auto-963000/disk.qcow2
	I0307 19:48:46.458240    5179 main.go:141] libmachine: STDOUT: 
	I0307 19:48:46.458256    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:48:46.458270    5179 client.go:171] duration metric: took 356.861875ms to LocalClient.Create
	I0307 19:48:48.460458    5179 start.go:128] duration metric: took 2.416337791s to createHost
	I0307 19:48:48.460517    5179 start.go:83] releasing machines lock for "auto-963000", held for 2.416885167s
	W0307 19:48:48.460756    5179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:48:48.472348    5179 out.go:177] 
	W0307 19:48:48.478476    5179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:48:48.483407    5179 out.go:239] * 
	* 
	W0307 19:48:48.484437    5179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:48:48.494343    5179 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.06s)
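Every Start failure in this NetworkPlugins group reduces to the same root cause visible in the trace above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube exits with GUEST_PROVISION (exit status 80). A minimal preflight probe for this condition, as a Go sketch (the file name and messages are illustrative; net.DialTimeout against the unix socket makes the same connection socket_vmnet_client is attempting):

	// probe_socket_vmnet.go: check that the socket_vmnet daemon is accepting
	// connections before any qemu2-driver test is started.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, restarting the socket_vmnet daemon before the run would likely clear the whole group rather than any single test.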

TestNetworkPlugins/group/calico/Start (9.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.692575292s)

-- stdout --
	* [calico-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-963000" primary control-plane node in "calico-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:48:50.710463    5295 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:48:50.710614    5295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:48:50.710618    5295 out.go:304] Setting ErrFile to fd 2...
	I0307 19:48:50.710620    5295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:48:50.710756    5295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:48:50.711813    5295 out.go:298] Setting JSON to false
	I0307 19:48:50.727995    5295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4702,"bootTime":1709865028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:48:50.728063    5295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:48:50.734235    5295 out.go:177] * [calico-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:48:50.743114    5295 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:48:50.743156    5295 notify.go:220] Checking for updates...
	I0307 19:48:50.747053    5295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:48:50.750080    5295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:48:50.753038    5295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:48:50.756093    5295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:48:50.759077    5295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:48:50.762354    5295 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:48:50.762429    5295 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:48:50.762474    5295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:48:50.767052    5295 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:48:50.773993    5295 start.go:297] selected driver: qemu2
	I0307 19:48:50.773998    5295 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:48:50.774005    5295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:48:50.776346    5295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:48:50.780055    5295 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:48:50.783231    5295 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:48:50.783278    5295 cni.go:84] Creating CNI manager for "calico"
	I0307 19:48:50.783282    5295 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0307 19:48:50.783312    5295 start.go:340] cluster config:
	{Name:calico-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:48:50.787747    5295 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:48:50.795072    5295 out.go:177] * Starting "calico-963000" primary control-plane node in "calico-963000" cluster
	I0307 19:48:50.798900    5295 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:48:50.798912    5295 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:48:50.798920    5295 cache.go:56] Caching tarball of preloaded images
	I0307 19:48:50.798966    5295 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:48:50.798971    5295 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:48:50.799041    5295 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/calico-963000/config.json ...
	I0307 19:48:50.799052    5295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/calico-963000/config.json: {Name:mk345d5579966b492d50fd611451d3bd0b8fddb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:48:50.799252    5295 start.go:360] acquireMachinesLock for calico-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:48:50.799283    5295 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "calico-963000"
	I0307 19:48:50.799293    5295 start.go:93] Provisioning new machine with config: &{Name:calico-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:calico-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:48:50.799323    5295 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:48:50.807091    5295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:48:50.824011    5295 start.go:159] libmachine.API.Create for "calico-963000" (driver="qemu2")
	I0307 19:48:50.824060    5295 client.go:168] LocalClient.Create starting
	I0307 19:48:50.824118    5295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:48:50.824147    5295 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:50.824157    5295 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:50.824204    5295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:48:50.824226    5295 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:50.824232    5295 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:50.824612    5295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:48:50.962325    5295 main.go:141] libmachine: Creating SSH key...
	I0307 19:48:51.001632    5295 main.go:141] libmachine: Creating Disk image...
	I0307 19:48:51.001641    5295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:48:51.001808    5295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:51.014086    5295 main.go:141] libmachine: STDOUT: 
	I0307 19:48:51.014106    5295 main.go:141] libmachine: STDERR: 
	I0307 19:48:51.014152    5295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2 +20000M
	I0307 19:48:51.024913    5295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:48:51.024930    5295 main.go:141] libmachine: STDERR: 
	I0307 19:48:51.024956    5295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:51.024962    5295 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:48:51.024988    5295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6d:22:de:fa:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:51.026634    5295 main.go:141] libmachine: STDOUT: 
	I0307 19:48:51.026650    5295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:48:51.026682    5295 client.go:171] duration metric: took 202.613667ms to LocalClient.Create
	I0307 19:48:53.028746    5295 start.go:128] duration metric: took 2.229500833s to createHost
	I0307 19:48:53.028792    5295 start.go:83] releasing machines lock for "calico-963000", held for 2.229593792s
	W0307 19:48:53.028835    5295 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:48:53.036892    5295 out.go:177] * Deleting "calico-963000" in qemu2 ...
	W0307 19:48:53.054585    5295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:48:53.054606    5295 start.go:728] Will try again in 5 seconds ...
	I0307 19:48:58.056581    5295 start.go:360] acquireMachinesLock for calico-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:48:58.057037    5295 start.go:364] duration metric: took 334.334µs to acquireMachinesLock for "calico-963000"
	I0307 19:48:58.057173    5295 start.go:93] Provisioning new machine with config: &{Name:calico-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:calico-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:48:58.057411    5295 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:48:58.066928    5295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:48:58.107138    5295 start.go:159] libmachine.API.Create for "calico-963000" (driver="qemu2")
	I0307 19:48:58.107193    5295 client.go:168] LocalClient.Create starting
	I0307 19:48:58.107312    5295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:48:58.107373    5295 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:58.107390    5295 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:58.107450    5295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:48:58.107491    5295 main.go:141] libmachine: Decoding PEM data...
	I0307 19:48:58.107500    5295 main.go:141] libmachine: Parsing certificate...
	I0307 19:48:58.108009    5295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:48:58.252685    5295 main.go:141] libmachine: Creating SSH key...
	I0307 19:48:58.303857    5295 main.go:141] libmachine: Creating Disk image...
	I0307 19:48:58.303862    5295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:48:58.304029    5295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:58.316248    5295 main.go:141] libmachine: STDOUT: 
	I0307 19:48:58.316273    5295 main.go:141] libmachine: STDERR: 
	I0307 19:48:58.316330    5295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2 +20000M
	I0307 19:48:58.327009    5295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:48:58.327049    5295 main.go:141] libmachine: STDERR: 
	I0307 19:48:58.327067    5295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:58.327073    5295 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:48:58.327105    5295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:ae:ec:aa:4c:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/calico-963000/disk.qcow2
	I0307 19:48:58.328956    5295 main.go:141] libmachine: STDOUT: 
	I0307 19:48:58.328984    5295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:48:58.329006    5295 client.go:171] duration metric: took 221.813334ms to LocalClient.Create
	I0307 19:49:00.331139    5295 start.go:128] duration metric: took 2.27378325s to createHost
	I0307 19:49:00.331210    5295 start.go:83] releasing machines lock for "calico-963000", held for 2.274240375s
	W0307 19:49:00.331592    5295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:00.345355    5295 out.go:177] 
	W0307 19:49:00.348473    5295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:49:00.348546    5295 out.go:239] * 
	* 
	W0307 19:49:00.350873    5295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:00.359328    5295 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.69s)
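Each qemu-system-aarch64 invocation above is wrapped by socket_vmnet_client and ends with -netdev socket,id=net0,fd=3: the client is expected to connect to the daemon socket first and hand the connected socket to QEMU as file descriptor 3. When the connect is refused there is no descriptor to pass, so the client exits with status 1 before QEMU ever runs. A sketch of that fd-passing pattern in Go (a hypothetical wrapper, not minikube's or socket_vmnet's actual code):

	// fd3_wrapper.go: pass a connected unix socket to a child process as fd 3,
	// the mechanism implied by "-netdev socket,id=net0,fd=3" in the logs.
	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // the step that fails in every run above
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the socket fd for the child
		if err != nil {
			panic(err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}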

TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.888080292s)

-- stdout --
	* [custom-flannel-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-963000" primary control-plane node in "custom-flannel-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:49:02.808874    5421 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:49:02.809016    5421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:02.809020    5421 out.go:304] Setting ErrFile to fd 2...
	I0307 19:49:02.809022    5421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:02.809139    5421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:49:02.810229    5421 out.go:298] Setting JSON to false
	I0307 19:49:02.826799    5421 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4714,"bootTime":1709865028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:49:02.826896    5421 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:49:02.832194    5421 out.go:177] * [custom-flannel-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:49:02.839144    5421 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:49:02.842988    5421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:49:02.839206    5421 notify.go:220] Checking for updates...
	I0307 19:49:02.850118    5421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:49:02.857044    5421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:49:02.860111    5421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:49:02.867065    5421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:49:02.870444    5421 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:49:02.870508    5421 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:49:02.870551    5421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:49:02.873943    5421 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:49:02.881112    5421 start.go:297] selected driver: qemu2
	I0307 19:49:02.881117    5421 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:49:02.881123    5421 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:49:02.883246    5421 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:49:02.887063    5421 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:49:02.890155    5421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:49:02.890192    5421 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0307 19:49:02.890204    5421 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0307 19:49:02.890234    5421 start.go:340] cluster config:
	{Name:custom-flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:49:02.894319    5421 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:49:02.901010    5421 out.go:177] * Starting "custom-flannel-963000" primary control-plane node in "custom-flannel-963000" cluster
	I0307 19:49:02.905148    5421 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:49:02.905165    5421 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:49:02.905173    5421 cache.go:56] Caching tarball of preloaded images
	I0307 19:49:02.905226    5421 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:49:02.905230    5421 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:49:02.905289    5421 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/custom-flannel-963000/config.json ...
	I0307 19:49:02.905299    5421 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/custom-flannel-963000/config.json: {Name:mkbc22788c62549b50d14d7195f5738d9b8e6e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:49:02.905470    5421 start.go:360] acquireMachinesLock for custom-flannel-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:02.905499    5421 start.go:364] duration metric: took 23.042µs to acquireMachinesLock for "custom-flannel-963000"
	I0307 19:49:02.905508    5421 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:02.905537    5421 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:02.914076    5421 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:02.928921    5421 start.go:159] libmachine.API.Create for "custom-flannel-963000" (driver="qemu2")
	I0307 19:49:02.928944    5421 client.go:168] LocalClient.Create starting
	I0307 19:49:02.929017    5421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:02.929047    5421 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:02.929055    5421 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:02.929095    5421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:02.929115    5421 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:02.929123    5421 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:02.929429    5421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:03.063770    5421 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:03.216759    5421 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:03.216769    5421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:03.216971    5421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:03.229620    5421 main.go:141] libmachine: STDOUT: 
	I0307 19:49:03.229642    5421 main.go:141] libmachine: STDERR: 
	I0307 19:49:03.229710    5421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2 +20000M
	I0307 19:49:03.242209    5421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:03.242229    5421 main.go:141] libmachine: STDERR: 
	I0307 19:49:03.242249    5421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:03.242254    5421 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:03.242285    5421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:80:4e:32:a9:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:03.244237    5421 main.go:141] libmachine: STDOUT: 
	I0307 19:49:03.244265    5421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:03.244294    5421 client.go:171] duration metric: took 315.355833ms to LocalClient.Create
	I0307 19:49:05.245029    5421 start.go:128] duration metric: took 2.339558458s to createHost
	I0307 19:49:05.245148    5421 start.go:83] releasing machines lock for "custom-flannel-963000", held for 2.339735209s
	W0307 19:49:05.245247    5421 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:05.258419    5421 out.go:177] * Deleting "custom-flannel-963000" in qemu2 ...
	W0307 19:49:05.286099    5421 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:05.286139    5421 start.go:728] Will try again in 5 seconds ...
	I0307 19:49:10.288153    5421 start.go:360] acquireMachinesLock for custom-flannel-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:10.288742    5421 start.go:364] duration metric: took 427.792µs to acquireMachinesLock for "custom-flannel-963000"
	I0307 19:49:10.288812    5421 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:10.289117    5421 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:10.298929    5421 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:10.345808    5421 start.go:159] libmachine.API.Create for "custom-flannel-963000" (driver="qemu2")
	I0307 19:49:10.345879    5421 client.go:168] LocalClient.Create starting
	I0307 19:49:10.346002    5421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:10.346067    5421 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:10.346112    5421 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:10.346179    5421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:10.346220    5421 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:10.346231    5421 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:10.346751    5421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:10.496601    5421 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:10.596488    5421 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:10.596495    5421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:10.596662    5421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:10.609279    5421 main.go:141] libmachine: STDOUT: 
	I0307 19:49:10.609377    5421 main.go:141] libmachine: STDERR: 
	I0307 19:49:10.609448    5421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2 +20000M
	I0307 19:49:10.620572    5421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:10.620651    5421 main.go:141] libmachine: STDERR: 
	I0307 19:49:10.620664    5421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:10.620669    5421 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:10.620704    5421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:18:dd:05:71:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/custom-flannel-963000/disk.qcow2
	I0307 19:49:10.622525    5421 main.go:141] libmachine: STDOUT: 
	I0307 19:49:10.622771    5421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:10.622785    5421 client.go:171] duration metric: took 276.912209ms to LocalClient.Create
	I0307 19:49:12.624930    5421 start.go:128] duration metric: took 2.335867875s to createHost
	I0307 19:49:12.625052    5421 start.go:83] releasing machines lock for "custom-flannel-963000", held for 2.336363708s
	W0307 19:49:12.625392    5421 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:12.634088    5421 out.go:177] 
	W0307 19:49:12.639028    5421 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:49:12.639062    5421 out.go:239] * 
	* 
	W0307 19:49:12.641593    5421 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:12.650935    5421 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
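The control flow repeated in each of these traces is a single delete-and-retry pass: the first createHost fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds"), retries once, and on the second failure exits with status 80. A simplified Go sketch of that flow, with hypothetical names standing in for the real start.go logic:

	// retry_once.go: the one-retry pattern seen in the traces above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine.API.Create; in these runs it
	// always fails with the socket_vmnet connection error.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// the real code deletes the half-created profile before retrying
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test asserts on
			}
		}
		fmt.Println("host created")
	}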

TestNetworkPlugins/group/false/Start (9.88s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.875662041s)

-- stdout --
	* [false-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-963000" primary control-plane node in "false-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:49:15.157804    5547 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:49:15.157932    5547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:15.157937    5547 out.go:304] Setting ErrFile to fd 2...
	I0307 19:49:15.157939    5547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:15.158059    5547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:49:15.159266    5547 out.go:298] Setting JSON to false
	I0307 19:49:15.177661    5547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4727,"bootTime":1709865028,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:49:15.177750    5547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:49:15.182941    5547 out.go:177] * [false-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:49:15.191071    5547 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:49:15.194041    5547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:49:15.191117    5547 notify.go:220] Checking for updates...
	I0307 19:49:15.200995    5547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:49:15.207952    5547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:49:15.211037    5547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:49:15.214015    5547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:49:15.218279    5547 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:49:15.218345    5547 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:49:15.218393    5547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:49:15.223022    5547 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:49:15.229067    5547 start.go:297] selected driver: qemu2
	I0307 19:49:15.229073    5547 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:49:15.229080    5547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:49:15.231319    5547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:49:15.234835    5547 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:49:15.238069    5547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:49:15.238122    5547 cni.go:84] Creating CNI manager for "false"
	I0307 19:49:15.238154    5547 start.go:340] cluster config:
	{Name:false-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:49:15.242476    5547 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:49:15.250018    5547 out.go:177] * Starting "false-963000" primary control-plane node in "false-963000" cluster
	I0307 19:49:15.253943    5547 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:49:15.253955    5547 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:49:15.253963    5547 cache.go:56] Caching tarball of preloaded images
	I0307 19:49:15.254012    5547 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:49:15.254017    5547 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:49:15.254078    5547 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/false-963000/config.json ...
	I0307 19:49:15.254088    5547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/false-963000/config.json: {Name:mkbb917879e4674d5e3048d148621ef3cbd95323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:49:15.254280    5547 start.go:360] acquireMachinesLock for false-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:15.254309    5547 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "false-963000"
	I0307 19:49:15.254318    5547 start.go:93] Provisioning new machine with config: &{Name:false-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:false-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:15.254352    5547 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:15.262948    5547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:15.277948    5547 start.go:159] libmachine.API.Create for "false-963000" (driver="qemu2")
	I0307 19:49:15.277979    5547 client.go:168] LocalClient.Create starting
	I0307 19:49:15.278039    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:15.278070    5547 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:15.278081    5547 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:15.278130    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:15.278151    5547 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:15.278157    5547 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:15.278497    5547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:15.416705    5547 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:15.504482    5547 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:15.504492    5547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:15.504665    5547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:15.517356    5547 main.go:141] libmachine: STDOUT: 
	I0307 19:49:15.517378    5547 main.go:141] libmachine: STDERR: 
	I0307 19:49:15.517433    5547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2 +20000M
	I0307 19:49:15.528579    5547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:15.528598    5547 main.go:141] libmachine: STDERR: 
	I0307 19:49:15.528610    5547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:15.528616    5547 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:15.528664    5547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:2a:ae:1d:1b:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:15.530465    5547 main.go:141] libmachine: STDOUT: 
	I0307 19:49:15.530486    5547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:15.530505    5547 client.go:171] duration metric: took 252.530333ms to LocalClient.Create
	I0307 19:49:17.532705    5547 start.go:128] duration metric: took 2.2784135s to createHost
	I0307 19:49:17.532819    5547 start.go:83] releasing machines lock for "false-963000", held for 2.278594291s
	W0307 19:49:17.532882    5547 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:17.546155    5547 out.go:177] * Deleting "false-963000" in qemu2 ...
	W0307 19:49:17.573365    5547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:17.573411    5547 start.go:728] Will try again in 5 seconds ...
	I0307 19:49:22.575386    5547 start.go:360] acquireMachinesLock for false-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:22.575804    5547 start.go:364] duration metric: took 287.458µs to acquireMachinesLock for "false-963000"
	I0307 19:49:22.575929    5547 start.go:93] Provisioning new machine with config: &{Name:false-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:false-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:22.576155    5547 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:22.585544    5547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:22.632736    5547 start.go:159] libmachine.API.Create for "false-963000" (driver="qemu2")
	I0307 19:49:22.632789    5547 client.go:168] LocalClient.Create starting
	I0307 19:49:22.632895    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:22.632959    5547 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:22.632974    5547 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:22.633041    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:22.633082    5547 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:22.633099    5547 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:22.633662    5547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:22.783294    5547 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:22.919277    5547 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:22.919289    5547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:22.919516    5547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:22.932884    5547 main.go:141] libmachine: STDOUT: 
	I0307 19:49:22.932910    5547 main.go:141] libmachine: STDERR: 
	I0307 19:49:22.932985    5547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2 +20000M
	I0307 19:49:22.945170    5547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:22.945207    5547 main.go:141] libmachine: STDERR: 
	I0307 19:49:22.945224    5547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:22.945230    5547 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:22.945274    5547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:ba:03:79:8b:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/false-963000/disk.qcow2
	I0307 19:49:22.947278    5547 main.go:141] libmachine: STDOUT: 
	I0307 19:49:22.947309    5547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:22.947322    5547 client.go:171] duration metric: took 314.54025ms to LocalClient.Create
	I0307 19:49:24.949453    5547 start.go:128] duration metric: took 2.373361208s to createHost
	I0307 19:49:24.949518    5547 start.go:83] releasing machines lock for "false-963000", held for 2.373789125s
	W0307 19:49:24.949912    5547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:24.965674    5547 out.go:177] 
	W0307 19:49:24.972684    5547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:49:24.972715    5547 out.go:239] * 
	W0307 19:49:24.975689    5547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:24.987537    5547 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.88s)
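
The qemu-system-aarch64 command lines in the stderr above show how networking is wired: socket_vmnet_client connects to /var/run/socket_vmnet first, then launches QEMU with the connected socket as file descriptor 3, which the -netdev socket,id=net0,fd=3 flag consumes. A sketch of the same fd-passing pattern in Go (illustrative only; the real wrapper is the socket_vmnet_client binary, and QEMU's full argument list is elided):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connecting is the step that fails with "Connection refused"
		// throughout this report when the daemon is down.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching -netdev socket,fd=3.
		cmd := exec.Command("qemu-system-aarch64" /* remaining arguments elided */)
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}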

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.82705425s)

-- stdout --
	* [kindnet-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-963000" primary control-plane node in "kindnet-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:49:27.261583    5664 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:49:27.261720    5664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:27.261723    5664 out.go:304] Setting ErrFile to fd 2...
	I0307 19:49:27.261725    5664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:27.261847    5664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:49:27.262871    5664 out.go:298] Setting JSON to false
	I0307 19:49:27.279216    5664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4739,"bootTime":1709865028,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:49:27.279291    5664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:49:27.285593    5664 out.go:177] * [kindnet-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:49:27.293518    5664 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:49:27.296555    5664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:49:27.293583    5664 notify.go:220] Checking for updates...
	I0307 19:49:27.297954    5664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:49:27.301532    5664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:49:27.304542    5664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:49:27.307595    5664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:49:27.310906    5664 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:49:27.310978    5664 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:49:27.311027    5664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:49:27.315468    5664 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:49:27.322518    5664 start.go:297] selected driver: qemu2
	I0307 19:49:27.322522    5664 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:49:27.322527    5664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:49:27.324764    5664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:49:27.327503    5664 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:49:27.330622    5664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:49:27.330660    5664 cni.go:84] Creating CNI manager for "kindnet"
	I0307 19:49:27.330664    5664 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 19:49:27.330702    5664 start.go:340] cluster config:
	{Name:kindnet-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:49:27.334976    5664 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:49:27.342573    5664 out.go:177] * Starting "kindnet-963000" primary control-plane node in "kindnet-963000" cluster
	I0307 19:49:27.345517    5664 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:49:27.345531    5664 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:49:27.345543    5664 cache.go:56] Caching tarball of preloaded images
	I0307 19:49:27.345608    5664 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:49:27.345613    5664 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:49:27.345675    5664 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kindnet-963000/config.json ...
	I0307 19:49:27.345685    5664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kindnet-963000/config.json: {Name:mkbb63f5fb3b97740636161cbf9f8c0c929d8f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:49:27.345896    5664 start.go:360] acquireMachinesLock for kindnet-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:27.345926    5664 start.go:364] duration metric: took 23.708µs to acquireMachinesLock for "kindnet-963000"
	I0307 19:49:27.345935    5664 start.go:93] Provisioning new machine with config: &{Name:kindnet-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:27.345966    5664 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:27.353484    5664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:27.369392    5664 start.go:159] libmachine.API.Create for "kindnet-963000" (driver="qemu2")
	I0307 19:49:27.369424    5664 client.go:168] LocalClient.Create starting
	I0307 19:49:27.369488    5664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:27.369516    5664 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:27.369532    5664 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:27.369575    5664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:27.369597    5664 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:27.369608    5664 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:27.369935    5664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:27.506608    5664 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:27.565923    5664 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:27.565931    5664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:27.566116    5664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:27.578563    5664 main.go:141] libmachine: STDOUT: 
	I0307 19:49:27.578586    5664 main.go:141] libmachine: STDERR: 
	I0307 19:49:27.578648    5664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2 +20000M
	I0307 19:49:27.589736    5664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:27.589749    5664 main.go:141] libmachine: STDERR: 
	I0307 19:49:27.589776    5664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:27.589779    5664 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:27.589820    5664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b6:80:3b:ce:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:27.591484    5664 main.go:141] libmachine: STDOUT: 
	I0307 19:49:27.591498    5664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:27.591517    5664 client.go:171] duration metric: took 222.095584ms to LocalClient.Create
	I0307 19:49:29.593877    5664 start.go:128] duration metric: took 2.247958708s to createHost
	I0307 19:49:29.593968    5664 start.go:83] releasing machines lock for "kindnet-963000", held for 2.248125125s
	W0307 19:49:29.594063    5664 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:29.609187    5664 out.go:177] * Deleting "kindnet-963000" in qemu2 ...
	W0307 19:49:29.633957    5664 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:29.634006    5664 start.go:728] Will try again in 5 seconds ...
	I0307 19:49:34.634037    5664 start.go:360] acquireMachinesLock for kindnet-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:34.634474    5664 start.go:364] duration metric: took 330.208µs to acquireMachinesLock for "kindnet-963000"
	I0307 19:49:34.634583    5664 start.go:93] Provisioning new machine with config: &{Name:kindnet-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:34.634927    5664 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:34.644399    5664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:34.689560    5664 start.go:159] libmachine.API.Create for "kindnet-963000" (driver="qemu2")
	I0307 19:49:34.689608    5664 client.go:168] LocalClient.Create starting
	I0307 19:49:34.689722    5664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:34.689784    5664 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:34.689803    5664 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:34.689862    5664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:34.689906    5664 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:34.689920    5664 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:34.690468    5664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:34.837609    5664 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:34.995030    5664 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:34.995043    5664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:34.995252    5664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:35.007886    5664 main.go:141] libmachine: STDOUT: 
	I0307 19:49:35.007906    5664 main.go:141] libmachine: STDERR: 
	I0307 19:49:35.007965    5664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2 +20000M
	I0307 19:49:35.018851    5664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:35.018868    5664 main.go:141] libmachine: STDERR: 
	I0307 19:49:35.018885    5664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:35.018888    5664 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:35.018919    5664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:00:37:69:97:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kindnet-963000/disk.qcow2
	I0307 19:49:35.020678    5664 main.go:141] libmachine: STDOUT: 
	I0307 19:49:35.020692    5664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:35.020705    5664 client.go:171] duration metric: took 331.104583ms to LocalClient.Create
	I0307 19:49:37.022746    5664 start.go:128] duration metric: took 2.387894375s to createHost
	I0307 19:49:37.022777    5664 start.go:83] releasing machines lock for "kindnet-963000", held for 2.388382459s
	W0307 19:49:37.022935    5664 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:37.034197    5664 out.go:177] 
	W0307 19:49:37.038273    5664 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:49:37.038281    5664 out.go:239] * 
	W0307 19:49:37.039016    5664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:37.049214    5664 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
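
The stderr blocks also record minikube's recovery path: the first create fails, the half-built profile is deleted, minikube waits five seconds and retries once, and only then exits with GUEST_PROVISION (the exit status 80 each test reports). A condensed Go sketch of that control flow (createHost mirrors the createHost steps in the log; the delete step is elided):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the VM-creation step; while the daemon is
	// down it always fails the same way.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// "Deleting <profile> in qemu2" happens here in the real flow.
			time.Sleep(5 * time.Second)
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}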

TestNetworkPlugins/group/flannel/Start (9.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.720048167s)

-- stdout --
	* [flannel-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-963000" primary control-plane node in "flannel-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:49:39.424508    5791 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:49:39.424642    5791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:39.424646    5791 out.go:304] Setting ErrFile to fd 2...
	I0307 19:49:39.424648    5791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:39.424782    5791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:49:39.425848    5791 out.go:298] Setting JSON to false
	I0307 19:49:39.442234    5791 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4751,"bootTime":1709865028,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:49:39.442293    5791 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:49:39.448907    5791 out.go:177] * [flannel-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:49:39.456671    5791 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:49:39.456705    5791 notify.go:220] Checking for updates...
	I0307 19:49:39.462652    5791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:49:39.465682    5791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:49:39.467166    5791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:49:39.470654    5791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:49:39.473676    5791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:49:39.477029    5791 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:49:39.477088    5791 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:49:39.477133    5791 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:49:39.481642    5791 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:49:39.488703    5791 start.go:297] selected driver: qemu2
	I0307 19:49:39.488708    5791 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:49:39.488716    5791 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:49:39.490763    5791 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:49:39.493624    5791 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:49:39.496744    5791 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:49:39.496802    5791 cni.go:84] Creating CNI manager for "flannel"
	I0307 19:49:39.496810    5791 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0307 19:49:39.496835    5791 start.go:340] cluster config:
	{Name:flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:49:39.501164    5791 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:49:39.508591    5791 out.go:177] * Starting "flannel-963000" primary control-plane node in "flannel-963000" cluster
	I0307 19:49:39.512693    5791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:49:39.512721    5791 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:49:39.512739    5791 cache.go:56] Caching tarball of preloaded images
	I0307 19:49:39.512795    5791 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:49:39.512800    5791 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:49:39.512859    5791 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/flannel-963000/config.json ...
	I0307 19:49:39.512869    5791 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/flannel-963000/config.json: {Name:mk6a6875947dd7096101e3202dd2b88564f2ece4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:49:39.513187    5791 start.go:360] acquireMachinesLock for flannel-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:39.513214    5791 start.go:364] duration metric: took 22.084µs to acquireMachinesLock for "flannel-963000"
	I0307 19:49:39.513223    5791 start.go:93] Provisioning new machine with config: &{Name:flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:39.513262    5791 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:39.517674    5791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:39.532202    5791 start.go:159] libmachine.API.Create for "flannel-963000" (driver="qemu2")
	I0307 19:49:39.532228    5791 client.go:168] LocalClient.Create starting
	I0307 19:49:39.532282    5791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:39.532311    5791 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:39.532323    5791 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:39.532366    5791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:39.532386    5791 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:39.532393    5791 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:39.532736    5791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:39.670823    5791 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:39.715477    5791 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:39.715483    5791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:39.715661    5791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:39.727960    5791 main.go:141] libmachine: STDOUT: 
	I0307 19:49:39.727978    5791 main.go:141] libmachine: STDERR: 
	I0307 19:49:39.728029    5791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2 +20000M
	I0307 19:49:39.738456    5791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:39.738472    5791 main.go:141] libmachine: STDERR: 
	I0307 19:49:39.738485    5791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:39.738491    5791 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:39.738522    5791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c9:f8:2c:42:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:39.740184    5791 main.go:141] libmachine: STDOUT: 
	I0307 19:49:39.740202    5791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:39.740221    5791 client.go:171] duration metric: took 207.996209ms to LocalClient.Create
	I0307 19:49:41.742295    5791 start.go:128] duration metric: took 2.229106125s to createHost
	I0307 19:49:41.742348    5791 start.go:83] releasing machines lock for "flannel-963000", held for 2.22920575s
	W0307 19:49:41.742376    5791 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:41.747286    5791 out.go:177] * Deleting "flannel-963000" in qemu2 ...
	W0307 19:49:41.765733    5791 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:41.765741    5791 start.go:728] Will try again in 5 seconds ...
	I0307 19:49:46.767632    5791 start.go:360] acquireMachinesLock for flannel-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:46.767810    5791 start.go:364] duration metric: took 128.459µs to acquireMachinesLock for "flannel-963000"
	I0307 19:49:46.767824    5791 start.go:93] Provisioning new machine with config: &{Name:flannel-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:46.767863    5791 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:46.773131    5791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:46.787704    5791 start.go:159] libmachine.API.Create for "flannel-963000" (driver="qemu2")
	I0307 19:49:46.787730    5791 client.go:168] LocalClient.Create starting
	I0307 19:49:46.787792    5791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:46.787820    5791 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:46.787828    5791 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:46.787869    5791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:46.787893    5791 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:46.787898    5791 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:46.788178    5791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:46.928361    5791 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:47.050053    5791 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:47.050062    5791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:47.050264    5791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:47.062956    5791 main.go:141] libmachine: STDOUT: 
	I0307 19:49:47.062983    5791 main.go:141] libmachine: STDERR: 
	I0307 19:49:47.063055    5791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2 +20000M
	I0307 19:49:47.074069    5791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:47.074089    5791 main.go:141] libmachine: STDERR: 
	I0307 19:49:47.074100    5791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:47.074106    5791 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:47.074141    5791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:3a:00:a3:f2:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/flannel-963000/disk.qcow2
	I0307 19:49:47.075904    5791 main.go:141] libmachine: STDOUT: 
	I0307 19:49:47.075920    5791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:47.075939    5791 client.go:171] duration metric: took 288.211583ms to LocalClient.Create
	I0307 19:49:49.077923    5791 start.go:128] duration metric: took 2.310148084s to createHost
	I0307 19:49:49.077948    5791 start.go:83] releasing machines lock for "flannel-963000", held for 2.31022775s
	W0307 19:49:49.078021    5791 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:49.089305    5791 out.go:177] 
	W0307 19:49:49.093250    5791 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:49:49.093255    5791 out.go:239] * 
	* 
	W0307 19:49:49.098906    5791 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:49:49.109350    5791 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)
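Both create attempts above fail at the same step: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet, so the QEMU VM is never launched and minikube exits with GUEST_PROVISION. A minimal pre-flight check on the build host, a sketch assuming the paths shown in the log and that socket_vmnet_client execs the command it is given with the socket on fd 3 (as the qemu-system-aarch64 invocation above suggests):

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Probe connectivity the same way minikube does: wrap a trivial command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket reachable" || echo "connection refused"

If the daemon is not running, every qemu2-driver test in this group will fail the same way regardless of the CNI under test, which matches the identical failures below.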

TestNetworkPlugins/group/enable-default-cni/Start (9.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.803291417s)

-- stdout --
	* [enable-default-cni-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-963000" primary control-plane node in "enable-default-cni-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:49:51.550954    5912 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:49:51.551238    5912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:51.551245    5912 out.go:304] Setting ErrFile to fd 2...
	I0307 19:49:51.551247    5912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:49:51.551379    5912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:49:51.552795    5912 out.go:298] Setting JSON to false
	I0307 19:49:51.569810    5912 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4763,"bootTime":1709865028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:49:51.569954    5912 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:49:51.575526    5912 out.go:177] * [enable-default-cni-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:49:51.583675    5912 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:49:51.583720    5912 notify.go:220] Checking for updates...
	I0307 19:49:51.589633    5912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:49:51.592686    5912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:49:51.595657    5912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:49:51.598640    5912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:49:51.601635    5912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:49:51.603436    5912 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:49:51.603511    5912 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:49:51.603585    5912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:49:51.607579    5912 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:49:51.614518    5912 start.go:297] selected driver: qemu2
	I0307 19:49:51.614525    5912 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:49:51.614532    5912 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:49:51.616866    5912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:49:51.619605    5912 out.go:177] * Automatically selected the socket_vmnet network
	E0307 19:49:51.622737    5912 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0307 19:49:51.622748    5912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:49:51.622777    5912 cni.go:84] Creating CNI manager for "bridge"
	I0307 19:49:51.622782    5912 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:49:51.622811    5912 start.go:340] cluster config:
	{Name:enable-default-cni-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:49:51.627382    5912 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:49:51.634611    5912 out.go:177] * Starting "enable-default-cni-963000" primary control-plane node in "enable-default-cni-963000" cluster
	I0307 19:49:51.638698    5912 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:49:51.638714    5912 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:49:51.638739    5912 cache.go:56] Caching tarball of preloaded images
	I0307 19:49:51.638806    5912 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:49:51.638812    5912 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:49:51.638874    5912 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/enable-default-cni-963000/config.json ...
	I0307 19:49:51.638888    5912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/enable-default-cni-963000/config.json: {Name:mk1243e0f2f9e7286d15325df9143087868c6b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:49:51.639091    5912 start.go:360] acquireMachinesLock for enable-default-cni-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:51.639123    5912 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "enable-default-cni-963000"
	I0307 19:49:51.639134    5912 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:51.639166    5912 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:51.646669    5912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:51.663627    5912 start.go:159] libmachine.API.Create for "enable-default-cni-963000" (driver="qemu2")
	I0307 19:49:51.663650    5912 client.go:168] LocalClient.Create starting
	I0307 19:49:51.663704    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:51.663737    5912 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:51.663746    5912 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:51.663787    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:51.663808    5912 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:51.663822    5912 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:51.664202    5912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:51.803235    5912 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:51.872630    5912 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:51.872636    5912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:51.872820    5912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:51.885488    5912 main.go:141] libmachine: STDOUT: 
	I0307 19:49:51.885517    5912 main.go:141] libmachine: STDERR: 
	I0307 19:49:51.885585    5912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2 +20000M
	I0307 19:49:51.896408    5912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:51.896428    5912 main.go:141] libmachine: STDERR: 
	I0307 19:49:51.896454    5912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:51.896461    5912 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:51.896502    5912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:8a:9e:60:60:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:51.898237    5912 main.go:141] libmachine: STDOUT: 
	I0307 19:49:51.898254    5912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:51.898276    5912 client.go:171] duration metric: took 234.630125ms to LocalClient.Create
	I0307 19:49:53.900413    5912 start.go:128] duration metric: took 2.261311542s to createHost
	I0307 19:49:53.900488    5912 start.go:83] releasing machines lock for "enable-default-cni-963000", held for 2.261446708s
	W0307 19:49:53.900618    5912 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:53.914821    5912 out.go:177] * Deleting "enable-default-cni-963000" in qemu2 ...
	W0307 19:49:53.940281    5912 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:49:53.940322    5912 start.go:728] Will try again in 5 seconds ...
	I0307 19:49:58.942411    5912 start.go:360] acquireMachinesLock for enable-default-cni-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:49:58.943010    5912 start.go:364] duration metric: took 457.208µs to acquireMachinesLock for "enable-default-cni-963000"
	I0307 19:49:58.943158    5912 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:49:58.943419    5912 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:49:58.954020    5912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:49:59.002554    5912 start.go:159] libmachine.API.Create for "enable-default-cni-963000" (driver="qemu2")
	I0307 19:49:59.002609    5912 client.go:168] LocalClient.Create starting
	I0307 19:49:59.002747    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:49:59.002818    5912 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:59.002837    5912 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:59.002903    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:49:59.002945    5912 main.go:141] libmachine: Decoding PEM data...
	I0307 19:49:59.002962    5912 main.go:141] libmachine: Parsing certificate...
	I0307 19:49:59.003534    5912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:49:59.167971    5912 main.go:141] libmachine: Creating SSH key...
	I0307 19:49:59.251819    5912 main.go:141] libmachine: Creating Disk image...
	I0307 19:49:59.251829    5912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:49:59.252036    5912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:59.264959    5912 main.go:141] libmachine: STDOUT: 
	I0307 19:49:59.264996    5912 main.go:141] libmachine: STDERR: 
	I0307 19:49:59.265070    5912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2 +20000M
	I0307 19:49:59.276026    5912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:49:59.276045    5912 main.go:141] libmachine: STDERR: 
	I0307 19:49:59.276058    5912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:59.276061    5912 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:49:59.276097    5912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:11:6f:b2:5b:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/enable-default-cni-963000/disk.qcow2
	I0307 19:49:59.277941    5912 main.go:141] libmachine: STDOUT: 
	I0307 19:49:59.277967    5912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:49:59.277985    5912 client.go:171] duration metric: took 275.379833ms to LocalClient.Create
	I0307 19:50:01.280164    5912 start.go:128] duration metric: took 2.336808791s to createHost
	I0307 19:50:01.280225    5912 start.go:83] releasing machines lock for "enable-default-cni-963000", held for 2.337286959s
	W0307 19:50:01.280551    5912 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:01.295404    5912 out.go:177] 
	W0307 19:50:01.300424    5912 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:01.300455    5912 out.go:239] * 
	* 
	W0307 19:50:01.303254    5912 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:01.313376    5912 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.81s)
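Note the E-level line in the stderr above: start_flags.go rewrites the deprecated --enable-default-cni flag to --cni=bridge before the cluster config is generated, so this test exercises the same bridge CNI path as the bridge test below. A hypothetical undeprecated equivalent of the command under test, mirroring the flag rewrite logged above:

	out/minikube-darwin-arm64 start -p enable-default-cni-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2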

TestNetworkPlugins/group/bridge/Start (10.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.106172875s)

-- stdout --
	* [bridge-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-963000" primary control-plane node in "bridge-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:50:03.610726    6032 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:03.610841    6032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:03.610844    6032 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:03.610846    6032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:03.610979    6032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:03.612108    6032 out.go:298] Setting JSON to false
	I0307 19:50:03.628446    6032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4775,"bootTime":1709865028,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:03.628507    6032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:03.634914    6032 out.go:177] * [bridge-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:03.642962    6032 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:03.643012    6032 notify.go:220] Checking for updates...
	I0307 19:50:03.647879    6032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:03.650779    6032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:03.653855    6032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:03.656862    6032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:03.658251    6032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:03.661223    6032 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:03.661291    6032 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:50:03.661346    6032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:03.665849    6032 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:03.670849    6032 start.go:297] selected driver: qemu2
	I0307 19:50:03.670856    6032 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:03.670863    6032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:03.673063    6032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:03.676876    6032 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:03.679924    6032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:03.679980    6032 cni.go:84] Creating CNI manager for "bridge"
	I0307 19:50:03.679984    6032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:50:03.680023    6032 start.go:340] cluster config:
	{Name:bridge-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:03.684280    6032 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:03.691859    6032 out.go:177] * Starting "bridge-963000" primary control-plane node in "bridge-963000" cluster
	I0307 19:50:03.695884    6032 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:50:03.695899    6032 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:50:03.695911    6032 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:03.695970    6032 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:03.695975    6032 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:50:03.696048    6032 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/bridge-963000/config.json ...
	I0307 19:50:03.696059    6032 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/bridge-963000/config.json: {Name:mk3751498fecbb61d634cadd550745285402cfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:03.696269    6032 start.go:360] acquireMachinesLock for bridge-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:03.696298    6032 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "bridge-963000"
	I0307 19:50:03.696308    6032 start.go:93] Provisioning new machine with config: &{Name:bridge-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:03.696344    6032 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:03.704864    6032 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:50:03.720105    6032 start.go:159] libmachine.API.Create for "bridge-963000" (driver="qemu2")
	I0307 19:50:03.720131    6032 client.go:168] LocalClient.Create starting
	I0307 19:50:03.720193    6032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:03.720223    6032 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:03.720233    6032 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:03.720274    6032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:03.720297    6032 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:03.720303    6032 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:03.720637    6032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:03.858324    6032 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:04.011544    6032 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:04.011553    6032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:04.011757    6032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:04.024251    6032 main.go:141] libmachine: STDOUT: 
	I0307 19:50:04.024270    6032 main.go:141] libmachine: STDERR: 
	I0307 19:50:04.024326    6032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2 +20000M
	I0307 19:50:04.035141    6032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:04.035158    6032 main.go:141] libmachine: STDERR: 
	I0307 19:50:04.035172    6032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:04.035178    6032 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:04.035208    6032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:90:1d:dd:0a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:04.036964    6032 main.go:141] libmachine: STDOUT: 
	I0307 19:50:04.036979    6032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:04.036997    6032 client.go:171] duration metric: took 316.87275ms to LocalClient.Create
	I0307 19:50:06.039178    6032 start.go:128] duration metric: took 2.342873209s to createHost
	I0307 19:50:06.039280    6032 start.go:83] releasing machines lock for "bridge-963000", held for 2.343068s
	W0307 19:50:06.039330    6032 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:06.053543    6032 out.go:177] * Deleting "bridge-963000" in qemu2 ...
	W0307 19:50:06.079530    6032 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:06.079567    6032 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:11.079544    6032 start.go:360] acquireMachinesLock for bridge-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:11.079631    6032 start.go:364] duration metric: took 72.833µs to acquireMachinesLock for "bridge-963000"
	I0307 19:50:11.079645    6032 start.go:93] Provisioning new machine with config: &{Name:bridge-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:bridge-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:11.079690    6032 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:11.091625    6032 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:50:11.107170    6032 start.go:159] libmachine.API.Create for "bridge-963000" (driver="qemu2")
	I0307 19:50:11.107199    6032 client.go:168] LocalClient.Create starting
	I0307 19:50:11.107265    6032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:11.107303    6032 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:11.107313    6032 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:11.107353    6032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:11.107377    6032 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:11.107383    6032 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:11.107669    6032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:11.482048    6032 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:11.626209    6032 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:11.626220    6032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:11.626406    6032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:11.638695    6032 main.go:141] libmachine: STDOUT: 
	I0307 19:50:11.638717    6032 main.go:141] libmachine: STDERR: 
	I0307 19:50:11.638775    6032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2 +20000M
	I0307 19:50:11.649708    6032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:11.649726    6032 main.go:141] libmachine: STDERR: 
	I0307 19:50:11.649743    6032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:11.649747    6032 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:11.649776    6032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f6:44:e8:b5:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/bridge-963000/disk.qcow2
	I0307 19:50:11.651526    6032 main.go:141] libmachine: STDOUT: 
	I0307 19:50:11.651542    6032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:11.651567    6032 client.go:171] duration metric: took 544.373959ms to LocalClient.Create
	I0307 19:50:13.653649    6032 start.go:128] duration metric: took 2.574029583s to createHost
	I0307 19:50:13.653730    6032 start.go:83] releasing machines lock for "bridge-963000", held for 2.574196167s
	W0307 19:50:13.653902    6032 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:13.661834    6032 out.go:177] 
	W0307 19:50:13.665962    6032 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:13.665975    6032 out.go:239] * 
	W0307 19:50:13.667196    6032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:13.675905    6032 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.11s)

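Every qemu2 start in this report dies the same way: the "Connection refused" from socket_vmnet_client means nothing is listening on the unix socket /var/run/socket_vmnet, so the socket_vmnet daemon itself is not running on the CI host, and minikube never gets as far as booting QEMU. A minimal check for the host, assuming socket_vmnet was installed via Homebrew (the service name and the nc probe are assumptions, not taken from this log):

    # Does the unix socket exist, and is the daemon accepting connections?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet < /dev/null \
      && echo "daemon reachable" \
      || echo "connection refused: daemon not running"

    # If not, (re)start it; the daemon needs root to open vmnet.
    sudo brew services restart socket_vmnet
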
TestNetworkPlugins/group/kubenet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-963000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.804434417s)

-- stdout --
	* [kubenet-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-963000" primary control-plane node in "kubenet-963000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0307 19:50:15.941349    6157 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:15.941460    6157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:15.941463    6157 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:15.941466    6157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:15.941603    6157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:15.942668    6157 out.go:298] Setting JSON to false
	I0307 19:50:15.959160    6157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4787,"bootTime":1709865028,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:15.959216    6157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:15.965250    6157 out.go:177] * [kubenet-963000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:15.972269    6157 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:15.977301    6157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:15.972306    6157 notify.go:220] Checking for updates...
	I0307 19:50:15.983267    6157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:15.987307    6157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:15.990352    6157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:15.993347    6157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:16.001696    6157 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:16.001763    6157 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 19:50:16.001806    6157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:16.006358    6157 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:16.013255    6157 start.go:297] selected driver: qemu2
	I0307 19:50:16.013261    6157 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:16.013266    6157 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:16.015504    6157 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:16.018302    6157 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:16.021330    6157 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:16.021386    6157 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0307 19:50:16.021410    6157 start.go:340] cluster config:
	{Name:kubenet-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:16.025748    6157 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:16.033335    6157 out.go:177] * Starting "kubenet-963000" primary control-plane node in "kubenet-963000" cluster
	I0307 19:50:16.037260    6157 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:50:16.037274    6157 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:50:16.037285    6157 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:16.037337    6157 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:16.037343    6157 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:50:16.037401    6157 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kubenet-963000/config.json ...
	I0307 19:50:16.037411    6157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/kubenet-963000/config.json: {Name:mka20185fe38760f47a64a40f7be5c3b66b21a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:16.037611    6157 start.go:360] acquireMachinesLock for kubenet-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:16.037641    6157 start.go:364] duration metric: took 24.291µs to acquireMachinesLock for "kubenet-963000"
	I0307 19:50:16.037651    6157 start.go:93] Provisioning new machine with config: &{Name:kubenet-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:16.037683    6157 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:16.046265    6157 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:50:16.063336    6157 start.go:159] libmachine.API.Create for "kubenet-963000" (driver="qemu2")
	I0307 19:50:16.063363    6157 client.go:168] LocalClient.Create starting
	I0307 19:50:16.063419    6157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:16.063454    6157 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:16.063466    6157 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:16.063515    6157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:16.063537    6157 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:16.063543    6157 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:16.063902    6157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:16.203315    6157 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:16.302978    6157 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:16.302990    6157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:16.303192    6157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:16.315791    6157 main.go:141] libmachine: STDOUT: 
	I0307 19:50:16.315812    6157 main.go:141] libmachine: STDERR: 
	I0307 19:50:16.315870    6157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2 +20000M
	I0307 19:50:16.326781    6157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:16.326798    6157 main.go:141] libmachine: STDERR: 
	I0307 19:50:16.326821    6157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:16.326825    6157 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:16.326856    6157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c6:db:3a:17:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:16.328632    6157 main.go:141] libmachine: STDOUT: 
	I0307 19:50:16.328648    6157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:16.328668    6157 client.go:171] duration metric: took 265.309292ms to LocalClient.Create
	I0307 19:50:18.330805    6157 start.go:128] duration metric: took 2.293189708s to createHost
	I0307 19:50:18.330889    6157 start.go:83] releasing machines lock for "kubenet-963000", held for 2.293334666s
	W0307 19:50:18.330959    6157 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:18.340254    6157 out.go:177] * Deleting "kubenet-963000" in qemu2 ...
	W0307 19:50:18.365900    6157 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:18.365949    6157 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:23.367160    6157 start.go:360] acquireMachinesLock for kubenet-963000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:23.367568    6157 start.go:364] duration metric: took 300.375µs to acquireMachinesLock for "kubenet-963000"
	I0307 19:50:23.367693    6157 start.go:93] Provisioning new machine with config: &{Name:kubenet-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:23.367890    6157 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:23.374320    6157 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 19:50:23.410755    6157 start.go:159] libmachine.API.Create for "kubenet-963000" (driver="qemu2")
	I0307 19:50:23.410802    6157 client.go:168] LocalClient.Create starting
	I0307 19:50:23.410927    6157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:23.410987    6157 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:23.411006    6157 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:23.411065    6157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:23.411102    6157 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:23.411116    6157 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:23.411631    6157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:23.558295    6157 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:23.644682    6157 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:23.644696    6157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:23.644928    6157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:23.657501    6157 main.go:141] libmachine: STDOUT: 
	I0307 19:50:23.657523    6157 main.go:141] libmachine: STDERR: 
	I0307 19:50:23.657593    6157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2 +20000M
	I0307 19:50:23.668625    6157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:23.668644    6157 main.go:141] libmachine: STDERR: 
	I0307 19:50:23.668657    6157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:23.668660    6157 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:23.668690    6157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a1:81:a3:26:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/kubenet-963000/disk.qcow2
	I0307 19:50:23.670478    6157 main.go:141] libmachine: STDOUT: 
	I0307 19:50:23.670495    6157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:23.670506    6157 client.go:171] duration metric: took 259.711042ms to LocalClient.Create
	I0307 19:50:25.672639    6157 start.go:128] duration metric: took 2.304807375s to createHost
	I0307 19:50:25.672757    6157 start.go:83] releasing machines lock for "kubenet-963000", held for 2.305251833s
	W0307 19:50:25.673111    6157 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:25.682790    6157 out.go:177] 
	W0307 19:50:25.689833    6157 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:25.689869    6157 out.go:239] * 
	W0307 19:50:25.691577    6157 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:25.702802    6157 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)

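The kubenet failure above is identical, and the roughly ten-second duration of each of these Start failures is just the create/retry cycle: two VM creation attempts five seconds apart, both refused at the socket. To separate a wrapper problem from a QEMU problem, the client can be invoked on its own with a harmless command. This is a sketch using the paths minikube logs above, and it assumes socket_vmnet_client simply execs whatever command follows the socket path:

    # If the daemon is down, this fails before any QEMU process is started.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      || echo "socket_vmnet_client cannot connect; QEMU is not at fault"
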
TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.067423084s)

-- stdout --
	* [old-k8s-version-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-168000" primary control-plane node in "old-k8s-version-168000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-168000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0307 19:50:27.326971    6244 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:27.327221    6244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:27.327234    6244 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:27.327239    6244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:27.327373    6244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:27.331650    6244 out.go:298] Setting JSON to false
	I0307 19:50:27.349133    6244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4799,"bootTime":1709865028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:27.349193    6244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:27.352086    6244 out.go:177] * [old-k8s-version-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:27.358295    6244 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:27.358320    6244 notify.go:220] Checking for updates...
	I0307 19:50:27.363198    6244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:27.366239    6244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:27.369249    6244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:27.377218    6244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:27.388234    6244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:27.395536    6244 config.go:182] Loaded profile config "kubenet-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:27.395607    6244 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:27.395669    6244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:27.407192    6244 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:27.410211    6244 start.go:297] selected driver: qemu2
	I0307 19:50:27.410218    6244 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:27.410224    6244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:27.412964    6244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:27.416172    6244 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:27.419232    6244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:27.419271    6244 cni.go:84] Creating CNI manager for ""
	I0307 19:50:27.419278    6244 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 19:50:27.419308    6244 start.go:340] cluster config:
	{Name:old-k8s-version-168000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:27.424621    6244 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:27.432173    6244 out.go:177] * Starting "old-k8s-version-168000" primary control-plane node in "old-k8s-version-168000" cluster
	I0307 19:50:27.436326    6244 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 19:50:27.436350    6244 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 19:50:27.436360    6244 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:27.436446    6244 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:27.436452    6244 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 19:50:27.436522    6244 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/old-k8s-version-168000/config.json ...
	I0307 19:50:27.436532    6244 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/old-k8s-version-168000/config.json: {Name:mk88e1697d36e4f3e59f35902ded3d9a452e58b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:27.440031    6244 start.go:360] acquireMachinesLock for old-k8s-version-168000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:27.440071    6244 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "old-k8s-version-168000"
	I0307 19:50:27.440081    6244 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:27.440110    6244 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:27.448151    6244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:27.464042    6244 start.go:159] libmachine.API.Create for "old-k8s-version-168000" (driver="qemu2")
	I0307 19:50:27.464075    6244 client.go:168] LocalClient.Create starting
	I0307 19:50:27.464129    6244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:27.464155    6244 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:27.464165    6244 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:27.464209    6244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:27.464231    6244 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:27.464238    6244 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:27.464595    6244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:27.709066    6244 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:27.896257    6244 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:27.896267    6244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:27.896479    6244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:27.911191    6244 main.go:141] libmachine: STDOUT: 
	I0307 19:50:27.911226    6244 main.go:141] libmachine: STDERR: 
	I0307 19:50:27.911281    6244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2 +20000M
	I0307 19:50:27.926469    6244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:27.926499    6244 main.go:141] libmachine: STDERR: 
	I0307 19:50:27.926525    6244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:27.926543    6244 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:27.926603    6244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e5:9a:d7:f8:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:27.928433    6244 main.go:141] libmachine: STDOUT: 
	I0307 19:50:27.928446    6244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:27.928475    6244 client.go:171] duration metric: took 464.413083ms to LocalClient.Create
	I0307 19:50:29.930625    6244 start.go:128] duration metric: took 2.49058475s to createHost
	I0307 19:50:29.930730    6244 start.go:83] releasing machines lock for "old-k8s-version-168000", held for 2.490750167s
	W0307 19:50:29.930816    6244 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:29.950990    6244 out.go:177] * Deleting "old-k8s-version-168000" in qemu2 ...
	W0307 19:50:29.972674    6244 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:29.972705    6244 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:34.972781    6244 start.go:360] acquireMachinesLock for old-k8s-version-168000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:34.973262    6244 start.go:364] duration metric: took 359.666µs to acquireMachinesLock for "old-k8s-version-168000"
	I0307 19:50:34.973365    6244 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:34.973714    6244 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:34.989453    6244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:35.038273    6244 start.go:159] libmachine.API.Create for "old-k8s-version-168000" (driver="qemu2")
	I0307 19:50:35.038602    6244 client.go:168] LocalClient.Create starting
	I0307 19:50:35.039701    6244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:35.040128    6244 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:35.040157    6244 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:35.040230    6244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:35.040284    6244 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:35.040297    6244 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:35.040900    6244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:35.234444    6244 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:35.296953    6244 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:35.296959    6244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:35.297140    6244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:35.309954    6244 main.go:141] libmachine: STDOUT: 
	I0307 19:50:35.309972    6244 main.go:141] libmachine: STDERR: 
	I0307 19:50:35.310025    6244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2 +20000M
	I0307 19:50:35.321031    6244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:35.321055    6244 main.go:141] libmachine: STDERR: 
	I0307 19:50:35.321070    6244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:35.321086    6244 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:35.321124    6244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:63:d5:b6:12:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:35.322866    6244 main.go:141] libmachine: STDOUT: 
	I0307 19:50:35.322883    6244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:35.322899    6244 client.go:171] duration metric: took 284.296292ms to LocalClient.Create
	I0307 19:50:37.324994    6244 start.go:128] duration metric: took 2.351343666s to createHost
	I0307 19:50:37.325047    6244 start.go:83] releasing machines lock for "old-k8s-version-168000", held for 2.351857125s
	W0307 19:50:37.325420    6244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:37.335050    6244 out.go:177] 
	W0307 19:50:37.339358    6244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:37.339466    6244 out.go:239] * 
	* 
	W0307 19:50:37.341957    6244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:37.352089    6244 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (67.18075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)
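All of the FirstStart failures in this report reduce to the same host-side fault, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and the machine is left "Stopped". A minimal spot-check for the CI host, assuming the manual /opt/socket_vmnet install implied by the client path in the log (the relaunch line follows the socket_vmnet README defaults and is illustrative, not verified against this host):

	# Is anything serving the socket the client tries to reach?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, relaunching it as root should clear the
	# "Connection refused" errors (gateway address is the README default):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet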

TestStartStop/group/no-preload/serial/FirstStart (11.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (11.765598416s)

-- stdout --
	* [no-preload-200000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-200000" primary control-plane node in "no-preload-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:50:28.145764    6285 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:28.145902    6285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:28.145906    6285 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:28.145908    6285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:28.146042    6285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:28.147070    6285 out.go:298] Setting JSON to false
	I0307 19:50:28.163125    6285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4800,"bootTime":1709865028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:28.163212    6285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:28.169191    6285 out.go:177] * [no-preload-200000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:28.176192    6285 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:28.176225    6285 notify.go:220] Checking for updates...
	I0307 19:50:28.180186    6285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:28.183136    6285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:28.186214    6285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:28.190140    6285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:28.193183    6285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:28.196468    6285 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:28.196538    6285 config.go:182] Loaded profile config "old-k8s-version-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 19:50:28.196583    6285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:28.201072    6285 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:28.208170    6285 start.go:297] selected driver: qemu2
	I0307 19:50:28.208175    6285 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:28.208180    6285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:28.210392    6285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:28.213090    6285 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:28.217272    6285 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:28.217322    6285 cni.go:84] Creating CNI manager for ""
	I0307 19:50:28.217331    6285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:50:28.217335    6285 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:50:28.217358    6285 start.go:340] cluster config:
	{Name:no-preload-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:28.221828    6285 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.229188    6285 out.go:177] * Starting "no-preload-200000" primary control-plane node in "no-preload-200000" cluster
	I0307 19:50:28.233121    6285 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 19:50:28.233203    6285 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/no-preload-200000/config.json ...
	I0307 19:50:28.233220    6285 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/no-preload-200000/config.json: {Name:mk456bb4497508ddcdc777310b952ce76ef193fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:28.233280    6285 cache.go:107] acquiring lock: {Name:mk24a195480de2a1058c401c7ae7b8cb3e1694e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233270    6285 cache.go:107] acquiring lock: {Name:mk34194da6054361a9e7d4f09abbe1447f661b79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233321    6285 cache.go:107] acquiring lock: {Name:mk7afd2fc7b5bbb5798941441e3eefb3a268fdd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233357    6285 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 19:50:28.233364    6285 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.166µs
	I0307 19:50:28.233374    6285 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 19:50:28.233381    6285 cache.go:107] acquiring lock: {Name:mk53c5c7ca3490e2fca66e564eb154b0752a7025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233282    6285 cache.go:107] acquiring lock: {Name:mk83093c54ee396f17320e4983486ec93f8367cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233507    6285 start.go:360] acquireMachinesLock for no-preload-200000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:28.233520    6285 cache.go:107] acquiring lock: {Name:mkff5f77c2a982c2733104c1480a077f945332e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233528    6285 cache.go:107] acquiring lock: {Name:mked7ec58d004def7f3a4eed28c3d3116ef99439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233572    6285 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0307 19:50:28.233572    6285 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 19:50:28.233601    6285 cache.go:107] acquiring lock: {Name:mkd4eb9ff64245fd8edab8d1120d99e1a958b9be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:28.233706    6285 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 19:50:28.233748    6285 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 19:50:28.233763    6285 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0307 19:50:28.233776    6285 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 19:50:28.233881    6285 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 19:50:28.238791    6285 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 19:50:28.239595    6285 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 19:50:28.239621    6285 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 19:50:28.239684    6285 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0307 19:50:28.239694    6285 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0307 19:50:28.239751    6285 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 19:50:28.239800    6285 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 19:50:29.930892    6285 start.go:364] duration metric: took 1.697405833s to acquireMachinesLock for "no-preload-200000"
	I0307 19:50:29.931050    6285 start.go:93] Provisioning new machine with config: &{Name:no-preload-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:29.931279    6285 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:29.941927    6285 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:29.994210    6285 start.go:159] libmachine.API.Create for "no-preload-200000" (driver="qemu2")
	I0307 19:50:29.994256    6285 client.go:168] LocalClient.Create starting
	I0307 19:50:29.994393    6285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:29.994444    6285 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:29.994464    6285 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:29.994532    6285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:29.994578    6285 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:29.994592    6285 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:29.995231    6285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:30.143875    6285 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:30.179831    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0307 19:50:30.295198    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0307 19:50:30.310425    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0307 19:50:30.311300    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0307 19:50:30.315938    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0307 19:50:30.321775    6285 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:30.321781    6285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:30.321967    6285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:30.331414    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0307 19:50:30.334862    6285 main.go:141] libmachine: STDOUT: 
	I0307 19:50:30.334873    6285 main.go:141] libmachine: STDERR: 
	I0307 19:50:30.334915    6285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2 +20000M
	I0307 19:50:30.345348    6285 cache.go:162] opening:  /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0307 19:50:30.346216    6285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:30.346224    6285 main.go:141] libmachine: STDERR: 
	I0307 19:50:30.346235    6285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:30.346238    6285 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:30.346271    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:39:46:07:ea:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:30.348233    6285 main.go:141] libmachine: STDOUT: 
	I0307 19:50:30.348251    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:30.348266    6285 client.go:171] duration metric: took 354.018333ms to LocalClient.Create
	I0307 19:50:30.394640    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 19:50:30.394651    6285 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.1614185s
	I0307 19:50:30.394656    6285 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 19:50:32.024192    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 19:50:32.024249    6285 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.791017666s
	I0307 19:50:32.024276    6285 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 19:50:32.348459    6285 start.go:128] duration metric: took 2.417198375s to createHost
	I0307 19:50:32.348561    6285 start.go:83] releasing machines lock for "no-preload-200000", held for 2.417673666s
	W0307 19:50:32.348616    6285 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:32.358917    6285 out.go:177] * Deleting "no-preload-200000" in qemu2 ...
	W0307 19:50:32.384369    6285 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:32.384403    6285 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:34.041419    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 19:50:34.041470    6285 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.808197458s
	I0307 19:50:34.041505    6285 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 19:50:34.258647    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 19:50:34.258726    6285 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.025699375s
	I0307 19:50:34.258770    6285 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 19:50:34.685285    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 19:50:34.685334    6285 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 6.452025833s
	I0307 19:50:34.685362    6285 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 19:50:35.767515    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 19:50:35.767561    6285 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 7.534614167s
	I0307 19:50:35.767585    6285 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 19:50:36.958540    6285 cache.go:157] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0307 19:50:36.958591    6285 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 8.72552025s
	I0307 19:50:36.958614    6285 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0307 19:50:36.958661    6285 cache.go:87] Successfully saved all images to host disk.
	I0307 19:50:37.384740    6285 start.go:360] acquireMachinesLock for no-preload-200000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:37.384901    6285 start.go:364] duration metric: took 116.25µs to acquireMachinesLock for "no-preload-200000"
	I0307 19:50:37.384941    6285 start.go:93] Provisioning new machine with config: &{Name:no-preload-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:37.385034    6285 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:37.394031    6285 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:37.424286    6285 start.go:159] libmachine.API.Create for "no-preload-200000" (driver="qemu2")
	I0307 19:50:37.424336    6285 client.go:168] LocalClient.Create starting
	I0307 19:50:37.424416    6285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:37.424450    6285 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:37.424463    6285 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:37.424516    6285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:37.424538    6285 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:37.424550    6285 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:37.424941    6285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:37.673508    6285 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:37.811069    6285 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:37.811077    6285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:37.811275    6285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:37.823725    6285 main.go:141] libmachine: STDOUT: 
	I0307 19:50:37.823750    6285 main.go:141] libmachine: STDERR: 
	I0307 19:50:37.823815    6285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2 +20000M
	I0307 19:50:37.835004    6285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:37.835032    6285 main.go:141] libmachine: STDERR: 
	I0307 19:50:37.835044    6285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:37.835047    6285 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:37.835098    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:98:ea:78:a3:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:37.836923    6285 main.go:141] libmachine: STDOUT: 
	I0307 19:50:37.836945    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:37.836962    6285 client.go:171] duration metric: took 412.635834ms to LocalClient.Create
	I0307 19:50:39.839072    6285 start.go:128] duration metric: took 2.454112042s to createHost
	I0307 19:50:39.839161    6285 start.go:83] releasing machines lock for "no-preload-200000", held for 2.454344334s
	W0307 19:50:39.839603    6285 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:39.848283    6285 out.go:177] 
	W0307 19:50:39.853467    6285 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:39.853522    6285 out.go:239] * 
	* 
	W0307 19:50:39.856708    6285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:39.866238    6285 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (65.552042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.83s)
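This run also shows the failure is isolated to host networking rather than image pulls: although both VM creation attempts failed, the no-preload image cache completed in the background (pause, coredns, kube-scheduler, kube-controller-manager, kube-proxy, kube-apiserver, and etcd were all saved to the host cache). If the daemon cannot be restored, one possible workaround is the qemu2 driver's builtin user-mode network; the flag below is an assumption taken from the minikube qemu2 driver documentation, not from this log, and that mode does not support `minikube service` or `minikube tunnel`:

	# hypothetical retry avoiding socket_vmnet entirely (user-mode networking)
	out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --driver=qemu2 --network=builtin --kubernetes-version=v1.29.0-rc.2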

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-168000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-168000 create -f testdata/busybox.yaml: exit status 1 (30.94925ms)

** stderr ** 
	error: context "old-k8s-version-168000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-168000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (35.058834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (34.451875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-168000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-168000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-168000 describe deploy/metrics-server -n kube-system: exit status 1 (28.656625ms)

** stderr ** 
	error: context "old-k8s-version-168000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-168000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (33.462667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
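The DeployApp and EnableAddonWhileActive failures for this profile are downstream of FirstStart: the VM never booted, so no kubeconfig context was written and every kubectl invocation aborts with `context "old-k8s-version-168000" does not exist`. Two quick checks that distinguish this cascade from an independent regression (kubectl's context listing is standard; the status invocation mirrors the post-mortem commands in this report):

	# no context was ever created for the profile
	kubectl config get-contexts old-k8s-version-168000
	# and the host never got past "Stopped"
	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000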

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-200000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-200000 create -f testdata/busybox.yaml: exit status 1 (31.196875ms)

** stderr ** 
	error: context "no-preload-200000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-200000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.843875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.47125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-200000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-200000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-200000 describe deploy/metrics-server -n kube-system: exit status 1 (26.544875ms)

** stderr ** 
	error: context "no-preload-200000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-200000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (31.205375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.181220542s)

-- stdout --
	* [old-k8s-version-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-168000" primary control-plane node in "old-k8s-version-168000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:50:41.417391    6404 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:41.417529    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:41.417532    6404 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:41.417535    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:41.417649    6404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:41.418656    6404 out.go:298] Setting JSON to false
	I0307 19:50:41.434877    6404 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4813,"bootTime":1709865028,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:41.434936    6404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:41.440415    6404 out.go:177] * [old-k8s-version-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:41.446319    6404 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:41.446371    6404 notify.go:220] Checking for updates...
	I0307 19:50:41.454174    6404 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:41.457244    6404 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:41.460342    6404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:41.463297    6404 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:41.466329    6404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:41.469567    6404 config.go:182] Loaded profile config "old-k8s-version-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 19:50:41.473254    6404 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 19:50:41.476287    6404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:41.480316    6404 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:50:41.487289    6404 start.go:297] selected driver: qemu2
	I0307 19:50:41.487295    6404 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:41.487365    6404 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:41.489751    6404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:41.489799    6404 cni.go:84] Creating CNI manager for ""
	I0307 19:50:41.489807    6404 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 19:50:41.489843    6404 start.go:340] cluster config:
	{Name:old-k8s-version-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:41.494615    6404 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:41.502276    6404 out.go:177] * Starting "old-k8s-version-168000" primary control-plane node in "old-k8s-version-168000" cluster
	I0307 19:50:41.507324    6404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 19:50:41.507347    6404 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 19:50:41.507362    6404 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:41.507450    6404 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:41.507460    6404 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 19:50:41.507529    6404 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/old-k8s-version-168000/config.json ...
	I0307 19:50:41.508043    6404 start.go:360] acquireMachinesLock for old-k8s-version-168000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:41.508082    6404 start.go:364] duration metric: took 31.541µs to acquireMachinesLock for "old-k8s-version-168000"
	I0307 19:50:41.508091    6404 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:50:41.508095    6404 fix.go:54] fixHost starting: 
	I0307 19:50:41.508231    6404 fix.go:112] recreateIfNeeded on old-k8s-version-168000: state=Stopped err=<nil>
	W0307 19:50:41.508241    6404 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:50:41.511361    6404 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-168000" ...
	I0307 19:50:41.519317    6404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:63:d5:b6:12:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:41.521420    6404 main.go:141] libmachine: STDOUT: 
	I0307 19:50:41.521439    6404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:41.521469    6404 fix.go:56] duration metric: took 13.372542ms for fixHost
	I0307 19:50:41.521474    6404 start.go:83] releasing machines lock for "old-k8s-version-168000", held for 13.387541ms
	W0307 19:50:41.521482    6404 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:41.521512    6404 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:41.521517    6404 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:46.523442    6404 start.go:360] acquireMachinesLock for old-k8s-version-168000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:46.523574    6404 start.go:364] duration metric: took 100µs to acquireMachinesLock for "old-k8s-version-168000"
	I0307 19:50:46.523619    6404 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:50:46.523627    6404 fix.go:54] fixHost starting: 
	I0307 19:50:46.523894    6404 fix.go:112] recreateIfNeeded on old-k8s-version-168000: state=Stopped err=<nil>
	W0307 19:50:46.523905    6404 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:50:46.527235    6404 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-168000" ...
	I0307 19:50:46.533092    6404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:63:d5:b6:12:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/old-k8s-version-168000/disk.qcow2
	I0307 19:50:46.537270    6404 main.go:141] libmachine: STDOUT: 
	I0307 19:50:46.537303    6404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:46.537337    6404 fix.go:56] duration metric: took 13.711209ms for fixHost
	I0307 19:50:46.537345    6404 start.go:83] releasing machines lock for "old-k8s-version-168000", held for 13.761042ms
	W0307 19:50:46.537422    6404 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-168000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-168000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:46.544047    6404 out.go:177] 
	W0307 19:50:46.547144    6404 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:46.547154    6404 out.go:239] * 
	* 
	W0307 19:50:46.548138    6404 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:46.559107    6404 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-168000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (54.056541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
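Every SecondStart failure in this report reduces to the same root cause visible above: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets "Connection refused" on /var/run/socket_vmnet, i.e. nothing is listening on that Unix socket on the build host. A minimal diagnostic sketch, assuming socket_vmnet was set up the way the minikube qemu2 driver docs describe (whether it is launchd-managed, and under what label, is an assumption that varies per install):

  ls -l /var/run/socket_vmnet                 # the socket the client dials (path from the log)
  pgrep -fl socket_vmnet                      # is the daemon process running at all?
  sudo launchctl list | grep -i socket_vmnet  # launchd job state, if launchd-managed

Until that daemon is restored, every test below that boots a qemu2 VM on the socket_vmnet network fails the same way.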

TestStartStop/group/no-preload/serial/SecondStart (5.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
E0307 19:50:44.634176    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.627125292s)

-- stdout --
	* [no-preload-200000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-200000" primary control-plane node in "no-preload-200000" cluster
	* Restarting existing qemu2 VM for "no-preload-200000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-200000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:50:44.087726    6425 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:44.087849    6425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:44.087852    6425 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:44.087855    6425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:44.087970    6425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:44.088970    6425 out.go:298] Setting JSON to false
	I0307 19:50:44.104989    6425 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4816,"bootTime":1709865028,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:44.105052    6425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:44.108974    6425 out.go:177] * [no-preload-200000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:44.114920    6425 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:44.118954    6425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:44.114945    6425 notify.go:220] Checking for updates...
	I0307 19:50:44.125936    6425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:44.128902    6425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:44.131963    6425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:44.134884    6425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:44.138276    6425 config.go:182] Loaded profile config "no-preload-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 19:50:44.138523    6425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:44.142928    6425 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:50:44.149905    6425 start.go:297] selected driver: qemu2
	I0307 19:50:44.149911    6425 start.go:901] validating driver "qemu2" against &{Name:no-preload-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:44.149961    6425 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:44.152228    6425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:44.152269    6425 cni.go:84] Creating CNI manager for ""
	I0307 19:50:44.152276    6425 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:50:44.152308    6425 start.go:340] cluster config:
	{Name:no-preload-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:44.156634    6425 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.163897    6425 out.go:177] * Starting "no-preload-200000" primary control-plane node in "no-preload-200000" cluster
	I0307 19:50:44.167879    6425 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 19:50:44.167960    6425 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/no-preload-200000/config.json ...
	I0307 19:50:44.168015    6425 cache.go:107] acquiring lock: {Name:mk24a195480de2a1058c401c7ae7b8cb3e1694e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168023    6425 cache.go:107] acquiring lock: {Name:mk83093c54ee396f17320e4983486ec93f8367cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168028    6425 cache.go:107] acquiring lock: {Name:mk34194da6054361a9e7d4f09abbe1447f661b79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168092    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 19:50:44.168116    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 19:50:44.168115    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 19:50:44.168122    6425 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 111.291µs
	I0307 19:50:44.168124    6425 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 105.166µs
	I0307 19:50:44.168130    6425 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.583µs
	I0307 19:50:44.168133    6425 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 19:50:44.168136    6425 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 19:50:44.168132    6425 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 19:50:44.168139    6425 cache.go:107] acquiring lock: {Name:mked7ec58d004def7f3a4eed28c3d3116ef99439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168143    6425 cache.go:107] acquiring lock: {Name:mkff5f77c2a982c2733104c1480a077f945332e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168145    6425 cache.go:107] acquiring lock: {Name:mk53c5c7ca3490e2fca66e564eb154b0752a7025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168187    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0307 19:50:44.168194    6425 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 53.5µs
	I0307 19:50:44.168198    6425 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0307 19:50:44.168199    6425 cache.go:107] acquiring lock: {Name:mkd4eb9ff64245fd8edab8d1120d99e1a958b9be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168227    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 19:50:44.168229    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 19:50:44.168232    6425 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 93.958µs
	I0307 19:50:44.168233    6425 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 89.25µs
	I0307 19:50:44.168236    6425 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 19:50:44.168237    6425 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 19:50:44.168251    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 19:50:44.168177    6425 cache.go:107] acquiring lock: {Name:mk7afd2fc7b5bbb5798941441e3eefb3a268fdd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:44.168256    6425 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 75.042µs
	I0307 19:50:44.168261    6425 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 19:50:44.168290    6425 cache.go:115] /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 19:50:44.168295    6425 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 130.792µs
	I0307 19:50:44.168299    6425 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 19:50:44.168305    6425 cache.go:87] Successfully saved all images to host disk.
	I0307 19:50:44.168461    6425 start.go:360] acquireMachinesLock for no-preload-200000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:44.168497    6425 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "no-preload-200000"
	I0307 19:50:44.168506    6425 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:50:44.168510    6425 fix.go:54] fixHost starting: 
	I0307 19:50:44.168642    6425 fix.go:112] recreateIfNeeded on no-preload-200000: state=Stopped err=<nil>
	W0307 19:50:44.168653    6425 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:50:44.176969    6425 out.go:177] * Restarting existing qemu2 VM for "no-preload-200000" ...
	I0307 19:50:44.180787    6425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:98:ea:78:a3:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:44.182867    6425 main.go:141] libmachine: STDOUT: 
	I0307 19:50:44.182886    6425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:44.182916    6425 fix.go:56] duration metric: took 14.405291ms for fixHost
	I0307 19:50:44.182921    6425 start.go:83] releasing machines lock for "no-preload-200000", held for 14.419875ms
	W0307 19:50:44.182930    6425 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:44.182961    6425 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:44.182966    6425 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:49.184394    6425 start.go:360] acquireMachinesLock for no-preload-200000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:49.607306    6425 start.go:364] duration metric: took 422.797458ms to acquireMachinesLock for "no-preload-200000"
	I0307 19:50:49.607386    6425 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:50:49.607405    6425 fix.go:54] fixHost starting: 
	I0307 19:50:49.608071    6425 fix.go:112] recreateIfNeeded on no-preload-200000: state=Stopped err=<nil>
	W0307 19:50:49.608100    6425 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:50:49.617517    6425 out.go:177] * Restarting existing qemu2 VM for "no-preload-200000" ...
	I0307 19:50:49.630578    6425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:98:ea:78:a3:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/no-preload-200000/disk.qcow2
	I0307 19:50:49.641153    6425 main.go:141] libmachine: STDOUT: 
	I0307 19:50:49.641229    6425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:49.641304    6425 fix.go:56] duration metric: took 33.897834ms for fixHost
	I0307 19:50:49.641323    6425 start.go:83] releasing machines lock for "no-preload-200000", held for 33.976792ms
	W0307 19:50:49.641555    6425 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-200000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-200000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:49.650504    6425 out.go:177] 
	W0307 19:50:49.654677    6425 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:49.654712    6425 out.go:239] * 
	* 
	W0307 19:50:49.656863    6425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:49.669571    6425 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-200000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (66.995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.70s)
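Unlike the old-k8s-version profile, this profile runs with --preload=false, so instead of unpacking one preloaded tarball, the cache.go lines above verify a per-image tar cache (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, storage-provisioner) before launching the VM. A quick spot-check of that cache, using the path from this run's logs:

  ls /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/images/arm64/registry.k8s.io/

The image verification succeeds either way; the start still dies at the same socket_vmnet connection as before.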

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-168000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (31.676ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
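From this point the failure mode shifts: because the post-stop start never completed, minikube never rewrote the kubeconfig entry for the profile, so the remaining serial steps fail on a missing context rather than on exit status 80. A quick way to confirm (context name taken from the test above):

  kubectl config get-contexts old-k8s-version-168000  # errors: context not found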

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-168000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-168000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-168000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.045583ms)

** stderr ** 
	error: context "old-k8s-version-168000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-168000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (31.087667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
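For reference, the assertion this test never got to run checks that the dashboard-metrics-scraper deployment carries the overridden image; against a healthy cluster it would look roughly like this sketch, reusing the command and expected image from the log above:

  kubectl --context old-k8s-version-168000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard | grep 'registry.k8s.io/echoserver:1.4'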

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-168000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (31.030125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
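The -want/+got diff above follows go-cmp conventions: each line prefixed with - is an expected v1.20.0 image missing from the got list, which is empty because the VM never booted. Re-running the probe by hand (command verbatim from the log) would show the same empty result while the host is down:

  out/minikube-darwin-arm64 -p old-k8s-version-168000 image list --format=json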

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-168000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-168000 --alsologtostderr -v=1: exit status 83 (42.412ms)

-- stdout --
	* The control-plane node old-k8s-version-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-168000"

-- /stdout --
** stderr ** 
	I0307 19:50:46.817384    6445 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:46.817787    6445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:46.817790    6445 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:46.817793    6445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:46.817947    6445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:46.818155    6445 out.go:298] Setting JSON to false
	I0307 19:50:46.818165    6445 mustload.go:65] Loading cluster: old-k8s-version-168000
	I0307 19:50:46.818357    6445 config.go:182] Loaded profile config "old-k8s-version-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 19:50:46.821993    6445 out.go:177] * The control-plane node old-k8s-version-168000 host is not running: state=Stopped
	I0307 19:50:46.824979    6445 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-168000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-168000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (30.803416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (30.770375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
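Pause exits with status 83 and prints its own remediation instead of attempting anything against a stopped host. The recovery it suggests (which in this run would still hit the socket_vmnet error until the daemon is restored):

  minikube start -p old-k8s-version-168000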

TestStartStop/group/embed-certs/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.882640875s)

-- stdout --
	* [embed-certs-612000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-612000" primary control-plane node in "embed-certs-612000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-612000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:50:47.301338    6468 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:47.301464    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:47.301471    6468 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:47.301473    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:47.301603    6468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:47.302692    6468 out.go:298] Setting JSON to false
	I0307 19:50:47.318633    6468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4819,"bootTime":1709865028,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:47.318700    6468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:47.323497    6468 out.go:177] * [embed-certs-612000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:47.331463    6468 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:47.331490    6468 notify.go:220] Checking for updates...
	I0307 19:50:47.338406    6468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:47.341478    6468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:47.344417    6468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:47.347399    6468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:47.350438    6468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:47.353760    6468 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:47.353820    6468 config.go:182] Loaded profile config "no-preload-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 19:50:47.353876    6468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:47.358327    6468 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:47.365413    6468 start.go:297] selected driver: qemu2
	I0307 19:50:47.365421    6468 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:47.365428    6468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:47.367677    6468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:47.370434    6468 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:47.373495    6468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:47.373552    6468 cni.go:84] Creating CNI manager for ""
	I0307 19:50:47.373561    6468 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:50:47.373571    6468 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:50:47.373595    6468 start.go:340] cluster config:
	{Name:embed-certs-612000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:47.378260    6468 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:47.383430    6468 out.go:177] * Starting "embed-certs-612000" primary control-plane node in "embed-certs-612000" cluster
	I0307 19:50:47.387416    6468 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:50:47.387432    6468 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:50:47.387445    6468 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:47.387515    6468 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:47.387521    6468 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:50:47.387594    6468 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/embed-certs-612000/config.json ...
	I0307 19:50:47.387606    6468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/embed-certs-612000/config.json: {Name:mkb3987d5c69fa5c7510cda65e43127840127505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:47.387825    6468 start.go:360] acquireMachinesLock for embed-certs-612000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:47.387857    6468 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "embed-certs-612000"
	I0307 19:50:47.387868    6468 start.go:93] Provisioning new machine with config: &{Name:embed-certs-612000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:47.387902    6468 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:47.391398    6468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:47.409822    6468 start.go:159] libmachine.API.Create for "embed-certs-612000" (driver="qemu2")
	I0307 19:50:47.409850    6468 client.go:168] LocalClient.Create starting
	I0307 19:50:47.409915    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:47.409943    6468 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:47.409954    6468 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:47.409998    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:47.410021    6468 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:47.410028    6468 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:47.410395    6468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:47.549325    6468 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:47.579385    6468 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:47.579390    6468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:47.579570    6468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:47.592059    6468 main.go:141] libmachine: STDOUT: 
	I0307 19:50:47.592079    6468 main.go:141] libmachine: STDERR: 
	I0307 19:50:47.592128    6468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2 +20000M
	I0307 19:50:47.603143    6468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:47.603158    6468 main.go:141] libmachine: STDERR: 
	I0307 19:50:47.603173    6468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:47.603177    6468 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:47.603210    6468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:17:49:2f:e6:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:47.604907    6468 main.go:141] libmachine: STDOUT: 
	I0307 19:50:47.604922    6468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:47.604942    6468 client.go:171] duration metric: took 195.092667ms to LocalClient.Create
	I0307 19:50:49.607064    6468 start.go:128] duration metric: took 2.219225333s to createHost
	I0307 19:50:49.607142    6468 start.go:83] releasing machines lock for "embed-certs-612000", held for 2.21936575s
	W0307 19:50:49.607212    6468 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:49.626583    6468 out.go:177] * Deleting "embed-certs-612000" in qemu2 ...
	W0307 19:50:49.680919    6468 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:49.680982    6468 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:54.682116    6468 start.go:360] acquireMachinesLock for embed-certs-612000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:54.682611    6468 start.go:364] duration metric: took 380.042µs to acquireMachinesLock for "embed-certs-612000"
	I0307 19:50:54.682784    6468 start.go:93] Provisioning new machine with config: &{Name:embed-certs-612000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:54.683136    6468 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:54.693676    6468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:54.743239    6468 start.go:159] libmachine.API.Create for "embed-certs-612000" (driver="qemu2")
	I0307 19:50:54.743298    6468 client.go:168] LocalClient.Create starting
	I0307 19:50:54.743405    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:54.743456    6468 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:54.743472    6468 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:54.743543    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:54.743584    6468 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:54.743594    6468 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:54.744832    6468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:54.902623    6468 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:55.082096    6468 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:55.082109    6468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:55.082306    6468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:55.095244    6468 main.go:141] libmachine: STDOUT: 
	I0307 19:50:55.095263    6468 main.go:141] libmachine: STDERR: 
	I0307 19:50:55.095321    6468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2 +20000M
	I0307 19:50:55.106180    6468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:55.106196    6468 main.go:141] libmachine: STDERR: 
	I0307 19:50:55.106213    6468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:55.106218    6468 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:55.106246    6468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:66:45:16:8d:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:50:55.107950    6468 main.go:141] libmachine: STDOUT: 
	I0307 19:50:55.107964    6468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:55.107977    6468 client.go:171] duration metric: took 364.687292ms to LocalClient.Create
	I0307 19:50:57.110092    6468 start.go:128] duration metric: took 2.427022292s to createHost
	I0307 19:50:57.110186    6468 start.go:83] releasing machines lock for "embed-certs-612000", held for 2.427617166s
	W0307 19:50:57.110563    6468 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:57.122320    6468 out.go:177] 
	W0307 19:50:57.126362    6468 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:50:57.126417    6468 out.go:239] * 
	W0307 19:50:57.129115    6468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:50:57.139259    6468 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (66.702459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.95s)
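
Note: every FirstStart failure in this group reduces to the same root cause, visible in the stderr above: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal sketch for checking the daemon on the CI host, using only paths already present in this log (socket_vmnet_client's connect-then-exec behavior is assumed from its usual usage, not demonstrated by this run):

	# check that the socket exists and a socket_vmnet daemon is serving it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# socket_vmnet_client connects to the socket and, on success, execs the
	# given command with the vmnet fd attached; `true` makes a cheap probe
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo socket ok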

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-200000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (33.725167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
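
Note: the context "no-preload-200000" does not exist error is a downstream symptom, not an independent failure: FirstStart never created the VM, so minikube never wrote this context into the kubeconfig, and every later kubectl --context call fails before reaching an API server. A quick confirmation sketch, using the KUBECONFIG path shown earlier in this log:

	KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig kubectl config get-contexts
	# no-preload-200000 will be missing from the list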

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-200000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-200000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-200000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.713375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-200000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-200000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.773791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
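
Note: per the expectation printed above, this test asserts that the dashboard deployment description contains the substituted test image " registry.k8s.io/echoserver:1.4". With a reachable cluster the assertion reduces to a sketch like the following (the grep reduction is an illustration, not the test's exact matching code):

	kubectl --context no-preload-200000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard | grep registry.k8s.io/echoserver:1.4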

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-200000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.58775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
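
Note: the -want/+got diff above is one-sided because there is no host to query, so the image listing comes back empty. The check can be reproduced by hand; a sketch, assuming jq is available on the host (the repoTags field name is an assumption about the JSON layout of `minikube image list --format=json`):

	out/minikube-darwin-arm64 -p no-preload-200000 image list --format=json | jq -r '.[].repoTags[]'
	# on a healthy v1.29.0-rc.2 cluster this would include the images on the -want side,
	# e.g. registry.k8s.io/kube-apiserver:v1.29.0-rc.2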

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-200000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-200000 --alsologtostderr -v=1: exit status 83 (41.8515ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-200000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-200000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:50:49.946905    6490 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:49.947031    6490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:49.947034    6490 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:49.947036    6490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:49.947167    6490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:49.947396    6490 out.go:298] Setting JSON to false
	I0307 19:50:49.947406    6490 mustload.go:65] Loading cluster: no-preload-200000
	I0307 19:50:49.947582    6490 config.go:182] Loaded profile config "no-preload-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 19:50:49.950994    6490 out.go:177] * The control-plane node no-preload-200000 host is not running: state=Stopped
	I0307 19:50:49.955000    6490 out.go:177]   To start a cluster, run: "minikube start -p no-preload-200000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-200000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.591ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (30.99875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
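
Note: pause exits with 83 rather than the provisioning code 80 because mustload detects the stopped host up front (mustload.go:65 in the stderr above) and bails out with advice instead of attempting any guest operation. The recovery sequence, exactly as suggested in the stdout block, would be:

	out/minikube-darwin-arm64 start -p no-preload-200000
	out/minikube-darwin-arm64 pause -p no-preload-200000 --alsologtostderr -v=1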

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.731225167s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-156000" primary control-plane node in "default-k8s-diff-port-156000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-156000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:50:50.659157    6525 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:50:50.659287    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:50.659291    6525 out.go:304] Setting ErrFile to fd 2...
	I0307 19:50:50.659294    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:50:50.659509    6525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:50:50.660978    6525 out.go:298] Setting JSON to false
	I0307 19:50:50.677183    6525 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4822,"bootTime":1709865028,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:50:50.677245    6525 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:50:50.679695    6525 out.go:177] * [default-k8s-diff-port-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:50:50.687099    6525 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:50:50.687146    6525 notify.go:220] Checking for updates...
	I0307 19:50:50.689984    6525 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:50:50.693993    6525 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:50:50.697009    6525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:50:50.699897    6525 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:50:50.702970    6525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:50:50.706439    6525 config.go:182] Loaded profile config "embed-certs-612000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:50.706496    6525 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:50:50.706559    6525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:50:50.709969    6525 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:50:50.716994    6525 start.go:297] selected driver: qemu2
	I0307 19:50:50.717002    6525 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:50:50.717009    6525 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:50:50.719259    6525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 19:50:50.720878    6525 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:50:50.724085    6525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:50:50.724132    6525 cni.go:84] Creating CNI manager for ""
	I0307 19:50:50.724141    6525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:50:50.724146    6525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:50:50.724189    6525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-156000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-156000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:50:50.728646    6525 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:50:50.735959    6525 out.go:177] * Starting "default-k8s-diff-port-156000" primary control-plane node in "default-k8s-diff-port-156000" cluster
	I0307 19:50:50.739997    6525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:50:50.740009    6525 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:50:50.740020    6525 cache.go:56] Caching tarball of preloaded images
	I0307 19:50:50.740071    6525 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:50:50.740077    6525 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:50:50.740139    6525 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/default-k8s-diff-port-156000/config.json ...
	I0307 19:50:50.740151    6525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/default-k8s-diff-port-156000/config.json: {Name:mkced15231edb84352064efb36701856181aff38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:50:50.740368    6525 start.go:360] acquireMachinesLock for default-k8s-diff-port-156000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:50.740404    6525 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "default-k8s-diff-port-156000"
	I0307 19:50:50.740416    6525 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-156000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:50.740446    6525 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:50.747976    6525 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:50.765725    6525 start.go:159] libmachine.API.Create for "default-k8s-diff-port-156000" (driver="qemu2")
	I0307 19:50:50.765757    6525 client.go:168] LocalClient.Create starting
	I0307 19:50:50.765816    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:50.765845    6525 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:50.765859    6525 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:50.765906    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:50.765929    6525 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:50.765937    6525 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:50.766302    6525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:50.904583    6525 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:50.944699    6525 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:50.944705    6525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:50.944868    6525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:50.957269    6525 main.go:141] libmachine: STDOUT: 
	I0307 19:50:50.957293    6525 main.go:141] libmachine: STDERR: 
	I0307 19:50:50.957352    6525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2 +20000M
	I0307 19:50:50.968139    6525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:50.968156    6525 main.go:141] libmachine: STDERR: 
	I0307 19:50:50.968172    6525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:50.968176    6525 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:50.968204    6525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ea:b0:b2:40:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:50.969956    6525 main.go:141] libmachine: STDOUT: 
	I0307 19:50:50.969973    6525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:50.969995    6525 client.go:171] duration metric: took 204.239ms to LocalClient.Create
	I0307 19:50:52.972143    6525 start.go:128] duration metric: took 2.2317555s to createHost
	I0307 19:50:52.972226    6525 start.go:83] releasing machines lock for "default-k8s-diff-port-156000", held for 2.231903083s
	W0307 19:50:52.972292    6525 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:52.982337    6525 out.go:177] * Deleting "default-k8s-diff-port-156000" in qemu2 ...
	W0307 19:50:53.011827    6525 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:50:53.011857    6525 start.go:728] Will try again in 5 seconds ...
	I0307 19:50:58.013262    6525 start.go:360] acquireMachinesLock for default-k8s-diff-port-156000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:50:58.013617    6525 start.go:364] duration metric: took 266.667µs to acquireMachinesLock for "default-k8s-diff-port-156000"
	I0307 19:50:58.013793    6525 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-156000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:50:58.014064    6525 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:50:58.023734    6525 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:50:58.073501    6525 start.go:159] libmachine.API.Create for "default-k8s-diff-port-156000" (driver="qemu2")
	I0307 19:50:58.073552    6525 client.go:168] LocalClient.Create starting
	I0307 19:50:58.073656    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:50:58.073714    6525 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:58.073730    6525 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:58.073789    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:50:58.073816    6525 main.go:141] libmachine: Decoding PEM data...
	I0307 19:50:58.073830    6525 main.go:141] libmachine: Parsing certificate...
	I0307 19:50:58.074366    6525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:50:58.221804    6525 main.go:141] libmachine: Creating SSH key...
	I0307 19:50:58.286933    6525 main.go:141] libmachine: Creating Disk image...
	I0307 19:50:58.286938    6525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:50:58.287130    6525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:58.299877    6525 main.go:141] libmachine: STDOUT: 
	I0307 19:50:58.299984    6525 main.go:141] libmachine: STDERR: 
	I0307 19:50:58.300047    6525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2 +20000M
	I0307 19:50:58.311203    6525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:50:58.311320    6525 main.go:141] libmachine: STDERR: 
	I0307 19:50:58.311336    6525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:58.311342    6525 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:50:58.311370    6525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:83:7c:c8:92:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:50:58.313185    6525 main.go:141] libmachine: STDOUT: 
	I0307 19:50:58.313199    6525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:50:58.313212    6525 client.go:171] duration metric: took 239.663208ms to LocalClient.Create
	I0307 19:51:00.315337    6525 start.go:128] duration metric: took 2.301326083s to createHost
	I0307 19:51:00.315562    6525 start.go:83] releasing machines lock for "default-k8s-diff-port-156000", held for 2.301891417s
	W0307 19:51:00.315994    6525 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-156000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:00.330685    6525 out.go:177] 
	W0307 19:51:00.334886    6525 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:00.334924    6525 out.go:239] * 
	W0307 19:51:00.337460    6525 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:51:00.349651    6525 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (67.51475ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.80s)
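
Triage note: every failure in this group reduces to the same host-side symptom — the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no VM ever boots. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs recommend (the service name and restart step are assumptions, not taken from this log):

	# is anything listening on the socket the driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if the daemon is down, restart it (assumes the Homebrew-managed service)
	sudo brew services restart socket_vmnet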

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-612000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-612000 create -f testdata/busybox.yaml: exit status 1 (29.397292ms)

** stderr ** 
	error: context "embed-certs-612000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-612000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (30.7525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (31.114542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
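
Triage note: kubectl never reaches a cluster here — FirstStart failed, so no kubeconfig context was written for the profile, and every kubectl --context call fails identically. A quick way to confirm what the kubeconfig actually holds (standard kubectl subcommands, not taken from this log):

	# the failed profile should be absent from this list
	kubectl config get-contexts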

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-612000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-612000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-612000 describe deploy/metrics-server -n kube-system: exit status 1 (27.555209ms)

** stderr ** 
	error: context "embed-certs-612000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-612000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (31.107334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-156000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-156000 create -f testdata/busybox.yaml: exit status 1 (29.871708ms)

** stderr ** 
	error: context "default-k8s-diff-port-156000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-156000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (31.216958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (30.951125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-156000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-156000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-156000 describe deploy/metrics-server -n kube-system: exit status 1 (27.213291ms)

** stderr ** 
	error: context "default-k8s-diff-port-156000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-156000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (31.58975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.184243209s)

-- stdout --
	* [embed-certs-612000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-612000" primary control-plane node in "embed-certs-612000" cluster
	* Restarting existing qemu2 VM for "embed-certs-612000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-612000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:51:00.768022    6603 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:00.768145    6603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:00.768148    6603 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:00.768151    6603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:00.768282    6603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:00.769274    6603 out.go:298] Setting JSON to false
	I0307 19:51:00.785271    6603 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4832,"bootTime":1709865028,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:51:00.785335    6603 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:51:00.790193    6603 out.go:177] * [embed-certs-612000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:51:00.797052    6603 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:51:00.801178    6603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:51:00.797087    6603 notify.go:220] Checking for updates...
	I0307 19:51:00.808094    6603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:51:00.811159    6603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:51:00.814244    6603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:51:00.815711    6603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:51:00.819526    6603 config.go:182] Loaded profile config "embed-certs-612000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:00.819771    6603 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:51:00.824206    6603 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:51:00.829163    6603 start.go:297] selected driver: qemu2
	I0307 19:51:00.829169    6603 start.go:901] validating driver "qemu2" against &{Name:embed-certs-612000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:00.829234    6603 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:51:00.831474    6603 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:51:00.831527    6603 cni.go:84] Creating CNI manager for ""
	I0307 19:51:00.831534    6603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:51:00.831565    6603 start.go:340] cluster config:
	{Name:embed-certs-612000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:00.835903    6603 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:51:00.843150    6603 out.go:177] * Starting "embed-certs-612000" primary control-plane node in "embed-certs-612000" cluster
	I0307 19:51:00.847175    6603 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:51:00.847191    6603 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:51:00.847205    6603 cache.go:56] Caching tarball of preloaded images
	I0307 19:51:00.847269    6603 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:51:00.847275    6603 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:51:00.847353    6603 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/embed-certs-612000/config.json ...
	I0307 19:51:00.847862    6603 start.go:360] acquireMachinesLock for embed-certs-612000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:00.847889    6603 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "embed-certs-612000"
	I0307 19:51:00.847897    6603 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:00.847902    6603 fix.go:54] fixHost starting: 
	I0307 19:51:00.848031    6603 fix.go:112] recreateIfNeeded on embed-certs-612000: state=Stopped err=<nil>
	W0307 19:51:00.848040    6603 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:00.852202    6603 out.go:177] * Restarting existing qemu2 VM for "embed-certs-612000" ...
	I0307 19:51:00.860242    6603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:66:45:16:8d:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:51:00.862380    6603 main.go:141] libmachine: STDOUT: 
	I0307 19:51:00.862405    6603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:00.862434    6603 fix.go:56] duration metric: took 14.531083ms for fixHost
	I0307 19:51:00.862439    6603 start.go:83] releasing machines lock for "embed-certs-612000", held for 14.546125ms
	W0307 19:51:00.862446    6603 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:00.862496    6603 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:00.862502    6603 start.go:728] Will try again in 5 seconds ...
	I0307 19:51:05.864568    6603 start.go:360] acquireMachinesLock for embed-certs-612000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:05.864959    6603 start.go:364] duration metric: took 288.916µs to acquireMachinesLock for "embed-certs-612000"
	I0307 19:51:05.865092    6603 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:05.865116    6603 fix.go:54] fixHost starting: 
	I0307 19:51:05.865847    6603 fix.go:112] recreateIfNeeded on embed-certs-612000: state=Stopped err=<nil>
	W0307 19:51:05.865876    6603 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:05.871484    6603 out.go:177] * Restarting existing qemu2 VM for "embed-certs-612000" ...
	I0307 19:51:05.877494    6603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:66:45:16:8d:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/embed-certs-612000/disk.qcow2
	I0307 19:51:05.887426    6603 main.go:141] libmachine: STDOUT: 
	I0307 19:51:05.887494    6603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:05.887586    6603 fix.go:56] duration metric: took 22.475833ms for fixHost
	I0307 19:51:05.887603    6603 start.go:83] releasing machines lock for "embed-certs-612000", held for 22.620625ms
	W0307 19:51:05.887807    6603 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-612000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-612000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:05.894437    6603 out.go:177] 
	W0307 19:51:05.897458    6603 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:05.897485    6603 out.go:239] * 
	* 
	W0307 19:51:05.899857    6603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:51:05.908456    6603 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (69.395041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
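
Triage note: as the trace shows, the start path retries fixHost once after five seconds ("Will try again in 5 seconds ...") and then exits 80 with the delete hint printed above. A recovery sketch that follows the log's own suggestion (profile name and flags copied from the failing invocation; this resets minikube state but does not revive the socket_vmnet daemon):

	out/minikube-darwin-arm64 delete -p embed-certs-612000
	out/minikube-darwin-arm64 start -p embed-certs-612000 --memory=2200 --embed-certs --driver=qemu2 --kubernetes-version=v1.28.4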

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (6.480135875s)

-- stdout --
	* [default-k8s-diff-port-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-156000" primary control-plane node in "default-k8s-diff-port-156000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-156000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-156000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:51:02.711813    6620 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:02.711931    6620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:02.711935    6620 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:02.711937    6620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:02.712058    6620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:02.713046    6620 out.go:298] Setting JSON to false
	I0307 19:51:02.729003    6620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4834,"bootTime":1709865028,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:51:02.729061    6620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:51:02.733516    6620 out.go:177] * [default-k8s-diff-port-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:51:02.740507    6620 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:51:02.744527    6620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:51:02.740566    6620 notify.go:220] Checking for updates...
	I0307 19:51:02.747529    6620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:51:02.750450    6620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:51:02.753514    6620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:51:02.756453    6620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:51:02.759824    6620 config.go:182] Loaded profile config "default-k8s-diff-port-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:02.760066    6620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:51:02.764471    6620 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:51:02.771447    6620 start.go:297] selected driver: qemu2
	I0307 19:51:02.771455    6620 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-156000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:02.771512    6620 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:51:02.773732    6620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 19:51:02.773783    6620 cni.go:84] Creating CNI manager for ""
	I0307 19:51:02.773790    6620 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:51:02.773818    6620 start.go:340] cluster config:
	{Name:default-k8s-diff-port-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-156000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:02.778096    6620 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:51:02.785460    6620 out.go:177] * Starting "default-k8s-diff-port-156000" primary control-plane node in "default-k8s-diff-port-156000" cluster
	I0307 19:51:02.789528    6620 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 19:51:02.789543    6620 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 19:51:02.789558    6620 cache.go:56] Caching tarball of preloaded images
	I0307 19:51:02.789616    6620 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:51:02.789622    6620 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 19:51:02.789705    6620 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/default-k8s-diff-port-156000/config.json ...
	I0307 19:51:02.790191    6620 start.go:360] acquireMachinesLock for default-k8s-diff-port-156000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:02.790219    6620 start.go:364] duration metric: took 20.667µs to acquireMachinesLock for "default-k8s-diff-port-156000"
	I0307 19:51:02.790226    6620 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:02.790230    6620 fix.go:54] fixHost starting: 
	I0307 19:51:02.790344    6620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-156000: state=Stopped err=<nil>
	W0307 19:51:02.790353    6620 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:02.793408    6620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-156000" ...
	I0307 19:51:02.801343    6620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:83:7c:c8:92:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:51:02.803313    6620 main.go:141] libmachine: STDOUT: 
	I0307 19:51:02.803335    6620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:02.803371    6620 fix.go:56] duration metric: took 13.141875ms for fixHost
	I0307 19:51:02.803377    6620 start.go:83] releasing machines lock for "default-k8s-diff-port-156000", held for 13.155042ms
	W0307 19:51:02.803384    6620 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:02.803425    6620 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:02.803430    6620 start.go:728] Will try again in 5 seconds ...
	I0307 19:51:07.804641    6620 start.go:360] acquireMachinesLock for default-k8s-diff-port-156000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:09.084482    6620 start.go:364] duration metric: took 1.279760708s to acquireMachinesLock for "default-k8s-diff-port-156000"
	I0307 19:51:09.084570    6620 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:09.084592    6620 fix.go:54] fixHost starting: 
	I0307 19:51:09.085302    6620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-156000: state=Stopped err=<nil>
	W0307 19:51:09.085331    6620 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:09.093940    6620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-156000" ...
	I0307 19:51:09.106136    6620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:83:7c:c8:92:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/default-k8s-diff-port-156000/disk.qcow2
	I0307 19:51:09.117726    6620 main.go:141] libmachine: STDOUT: 
	I0307 19:51:09.117801    6620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:09.117907    6620 fix.go:56] duration metric: took 33.318875ms for fixHost
	I0307 19:51:09.117923    6620 start.go:83] releasing machines lock for "default-k8s-diff-port-156000", held for 33.403208ms
	W0307 19:51:09.118114    6620 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-156000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-156000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:09.126684    6620 out.go:177] 
	W0307 19:51:09.132139    6620 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:09.132175    6620 out.go:239] * 
	* 
	W0307 19:51:09.134380    6620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:51:09.145857    6620 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-156000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (65.490125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.55s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-612000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (32.393042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-612000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-612000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-612000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.630958ms)

** stderr ** 
	error: context "embed-certs-612000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-612000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (31.050666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-612000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (31.084334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
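
Triage note: the want/got diff above compares an empty image list against the expected v1.28.4 set, consistent with the VM never booting rather than with images genuinely missing. A sketch for re-running the listing by hand, assuming jq is available (the image list invocation is copied from the test; the jq filter is an assumption about the JSON shape):

	out/minikube-darwin-arm64 -p embed-certs-612000 image list --format=json | jq -r '.[].repoTags[]' | sort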

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-612000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-612000 --alsologtostderr -v=1: exit status 83 (42.535292ms)

-- stdout --
	* The control-plane node embed-certs-612000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-612000"

-- /stdout --
** stderr ** 
	I0307 19:51:06.191443    6645 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:06.191601    6645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:06.191604    6645 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:06.191607    6645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:06.191729    6645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:06.191946    6645 out.go:298] Setting JSON to false
	I0307 19:51:06.191955    6645 mustload.go:65] Loading cluster: embed-certs-612000
	I0307 19:51:06.192141    6645 config.go:182] Loaded profile config "embed-certs-612000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:06.196127    6645 out.go:177] * The control-plane node embed-certs-612000 host is not running: state=Stopped
	I0307 19:51:06.199891    6645 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-612000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-612000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (30.390791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (30.933084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-612000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
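
Editor's note: the repeated "status error: exit status 7 (may be ok)" lines show the test helper tolerating one specific non-zero exit code: 7 signals a stopped host, so the post-mortem skips log retrieval instead of failing outright. A hedged sketch of that kind of branching on a subprocess exit code (the program is illustrative, not helpers_test.go itself; the binary path, profile name, and tolerated code are taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "embed-certs-612000")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
            // Exit status 7 means the host is not running; treat it as
            // non-fatal, exactly as the post-mortems above do.
            fmt.Printf("status error: exit status 7 (may be ok), host=%q\n", out)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("host state: %s\n", out)
    }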

TestStartStop/group/newest-cni/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.981446333s)

-- stdout --
	* [newest-cni-723000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-723000" primary control-plane node in "newest-cni-723000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-723000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:51:06.663296    6668 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:06.663443    6668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:06.663446    6668 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:06.663449    6668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:06.663563    6668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:06.664606    6668 out.go:298] Setting JSON to false
	I0307 19:51:06.681813    6668 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4838,"bootTime":1709865028,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:51:06.681890    6668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:51:06.686488    6668 out.go:177] * [newest-cni-723000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:51:06.693632    6668 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:51:06.696545    6668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:51:06.693732    6668 notify.go:220] Checking for updates...
	I0307 19:51:06.702618    6668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:51:06.705536    6668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:51:06.708562    6668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:51:06.711634    6668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:51:06.713394    6668 config.go:182] Loaded profile config "default-k8s-diff-port-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:06.713459    6668 config.go:182] Loaded profile config "multinode-407000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:06.713515    6668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:51:06.717555    6668 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 19:51:06.724408    6668 start.go:297] selected driver: qemu2
	I0307 19:51:06.724413    6668 start.go:901] validating driver "qemu2" against <nil>
	I0307 19:51:06.724423    6668 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:51:06.726679    6668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0307 19:51:06.726706    6668 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0307 19:51:06.734589    6668 out.go:177] * Automatically selected the socket_vmnet network
	I0307 19:51:06.736088    6668 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 19:51:06.736136    6668 cni.go:84] Creating CNI manager for ""
	I0307 19:51:06.736142    6668 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:51:06.736147    6668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 19:51:06.736179    6668 start.go:340] cluster config:
	{Name:newest-cni-723000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:06.740665    6668 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:51:06.747600    6668 out.go:177] * Starting "newest-cni-723000" primary control-plane node in "newest-cni-723000" cluster
	I0307 19:51:06.751573    6668 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 19:51:06.751588    6668 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 19:51:06.751598    6668 cache.go:56] Caching tarball of preloaded images
	I0307 19:51:06.751657    6668 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:51:06.751663    6668 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 19:51:06.751730    6668 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/newest-cni-723000/config.json ...
	I0307 19:51:06.751741    6668 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/newest-cni-723000/config.json: {Name:mk10bab208e3e992c727e8539fff98ea0c946731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 19:51:06.751941    6668 start.go:360] acquireMachinesLock for newest-cni-723000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:06.751968    6668 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "newest-cni-723000"
	I0307 19:51:06.751979    6668 start.go:93] Provisioning new machine with config: &{Name:newest-cni-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:51:06.752005    6668 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:51:06.759541    6668 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:51:06.774506    6668 start.go:159] libmachine.API.Create for "newest-cni-723000" (driver="qemu2")
	I0307 19:51:06.774532    6668 client.go:168] LocalClient.Create starting
	I0307 19:51:06.774586    6668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:51:06.774613    6668 main.go:141] libmachine: Decoding PEM data...
	I0307 19:51:06.774621    6668 main.go:141] libmachine: Parsing certificate...
	I0307 19:51:06.774664    6668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:51:06.774686    6668 main.go:141] libmachine: Decoding PEM data...
	I0307 19:51:06.774697    6668 main.go:141] libmachine: Parsing certificate...
	I0307 19:51:06.775062    6668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:51:06.914877    6668 main.go:141] libmachine: Creating SSH key...
	I0307 19:51:07.056404    6668 main.go:141] libmachine: Creating Disk image...
	I0307 19:51:07.056413    6668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:51:07.056611    6668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:07.069169    6668 main.go:141] libmachine: STDOUT: 
	I0307 19:51:07.069191    6668 main.go:141] libmachine: STDERR: 
	I0307 19:51:07.069253    6668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2 +20000M
	I0307 19:51:07.080253    6668 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:51:07.080269    6668 main.go:141] libmachine: STDERR: 
	I0307 19:51:07.080291    6668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:07.080297    6668 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:51:07.080326    6668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:20:53:d3:86:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:07.081987    6668 main.go:141] libmachine: STDOUT: 
	I0307 19:51:07.082005    6668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:07.082027    6668 client.go:171] duration metric: took 307.498041ms to LocalClient.Create
	I0307 19:51:09.084227    6668 start.go:128] duration metric: took 2.332277625s to createHost
	I0307 19:51:09.084312    6668 start.go:83] releasing machines lock for "newest-cni-723000", held for 2.332430333s
	W0307 19:51:09.084366    6668 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:09.100883    6668 out.go:177] * Deleting "newest-cni-723000" in qemu2 ...
	W0307 19:51:09.157836    6668 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:09.157883    6668 start.go:728] Will try again in 5 seconds ...
	I0307 19:51:14.158193    6668 start.go:360] acquireMachinesLock for newest-cni-723000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:14.158742    6668 start.go:364] duration metric: took 451µs to acquireMachinesLock for "newest-cni-723000"
	I0307 19:51:14.158878    6668 start.go:93] Provisioning new machine with config: &{Name:newest-cni-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 19:51:14.159184    6668 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 19:51:14.164891    6668 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 19:51:14.214305    6668 start.go:159] libmachine.API.Create for "newest-cni-723000" (driver="qemu2")
	I0307 19:51:14.214359    6668 client.go:168] LocalClient.Create starting
	I0307 19:51:14.214472    6668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/ca.pem
	I0307 19:51:14.214534    6668 main.go:141] libmachine: Decoding PEM data...
	I0307 19:51:14.214551    6668 main.go:141] libmachine: Parsing certificate...
	I0307 19:51:14.214613    6668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18333-1199/.minikube/certs/cert.pem
	I0307 19:51:14.214656    6668 main.go:141] libmachine: Decoding PEM data...
	I0307 19:51:14.214671    6668 main.go:141] libmachine: Parsing certificate...
	I0307 19:51:14.215162    6668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 19:51:14.366139    6668 main.go:141] libmachine: Creating SSH key...
	I0307 19:51:14.542932    6668 main.go:141] libmachine: Creating Disk image...
	I0307 19:51:14.542939    6668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 19:51:14.543154    6668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:14.556382    6668 main.go:141] libmachine: STDOUT: 
	I0307 19:51:14.556404    6668 main.go:141] libmachine: STDERR: 
	I0307 19:51:14.556470    6668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2 +20000M
	I0307 19:51:14.567245    6668 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 19:51:14.567264    6668 main.go:141] libmachine: STDERR: 
	I0307 19:51:14.567274    6668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:14.567282    6668 main.go:141] libmachine: Starting QEMU VM...
	I0307 19:51:14.567312    6668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:82:d6:05:15:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:14.569035    6668 main.go:141] libmachine: STDOUT: 
	I0307 19:51:14.569051    6668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:14.569064    6668 client.go:171] duration metric: took 354.713125ms to LocalClient.Create
	I0307 19:51:16.571168    6668 start.go:128] duration metric: took 2.412048708s to createHost
	I0307 19:51:16.571215    6668 start.go:83] releasing machines lock for "newest-cni-723000", held for 2.412546458s
	W0307 19:51:16.571651    6668 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:16.581165    6668 out.go:177] 
	W0307 19:51:16.587417    6668 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:16.587441    6668 out.go:239] * 
	* 
	W0307 19:51:16.590239    6668 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:51:16.604117    6668 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (68.538ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-723000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.05s)
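
Editor's note: every qemu2 start in this run fails the same way. socket_vmnet_client cannot reach the daemon's Unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it expects on -netdev socket,fd=3. The refusal can be reproduced outside minikube with a plain Unix-socket dial; a minimal sketch (the socket path comes straight from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // With no socket_vmnet daemon listening, this dial returns
        // "connect: connection refused", the same failure minikube logs.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused, the fix is on the CI host (bring the socket_vmnet service back up), not in any of the failing tests.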

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-156000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (32.814583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-156000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-156000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-156000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.733333ms)

** stderr ** 
	error: context "default-k8s-diff-port-156000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-156000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (30.968166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-156000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (30.661292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
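
Editor's note: the "(-want +got)" block above is go-cmp-style diff output. Every expected image sits on a "-" line and nothing appears on a "+" line, because "image list --format=json" reported no images from the never-started VM. A minimal sketch of producing such a diff, assuming the assertion compares two string slices (the real check lives in start_stop_delete_test.go; the image names here are a subset taken from the log):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/pause:3.9",
        }
        got := []string{} // empty: the VM never started, so no images exist
        if diff := cmp.Diff(want, got); diff != "" {
            // Prints a "-want +got" block in the same style as the report.
            fmt.Printf("v1.28.4 images missing (-want +got):\n%s", diff)
        }
    }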

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-156000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-156000 --alsologtostderr -v=1: exit status 83 (42.847667ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-156000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-156000"

-- /stdout --
** stderr ** 
	I0307 19:51:09.422249    6692 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:09.422384    6692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:09.422387    6692 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:09.422390    6692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:09.422525    6692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:09.422757    6692 out.go:298] Setting JSON to false
	I0307 19:51:09.422766    6692 mustload.go:65] Loading cluster: default-k8s-diff-port-156000
	I0307 19:51:09.422966    6692 config.go:182] Loaded profile config "default-k8s-diff-port-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:51:09.426951    6692 out.go:177] * The control-plane node default-k8s-diff-port-156000 host is not running: state=Stopped
	I0307 19:51:09.431007    6692 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-156000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-156000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (30.695417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (30.616958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-156000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.185520875s)

-- stdout --
	* [newest-cni-723000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-723000" primary control-plane node in "newest-cni-723000" cluster
	* Restarting existing qemu2 VM for "newest-cni-723000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-723000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 19:51:20.413844    6750 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:20.413966    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:20.413969    6750 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:20.413971    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:20.414098    6750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:20.415087    6750 out.go:298] Setting JSON to false
	I0307 19:51:20.431845    6750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4852,"bootTime":1709865028,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:51:20.431904    6750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:51:20.436344    6750 out.go:177] * [newest-cni-723000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:51:20.443464    6750 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:51:20.446392    6750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:51:20.443506    6750 notify.go:220] Checking for updates...
	I0307 19:51:20.450476    6750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:51:20.453497    6750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:51:20.456453    6750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:51:20.459467    6750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:51:20.462784    6750 config.go:182] Loaded profile config "newest-cni-723000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 19:51:20.463054    6750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:51:20.466418    6750 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:51:20.473468    6750 start.go:297] selected driver: qemu2
	I0307 19:51:20.473475    6750 start.go:901] validating driver "qemu2" against &{Name:newest-cni-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:20.473520    6750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:51:20.475922    6750 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 19:51:20.475969    6750 cni.go:84] Creating CNI manager for ""
	I0307 19:51:20.475977    6750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 19:51:20.476002    6750 start.go:340] cluster config:
	{Name:newest-cni-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:51:20.480397    6750 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 19:51:20.487527    6750 out.go:177] * Starting "newest-cni-723000" primary control-plane node in "newest-cni-723000" cluster
	I0307 19:51:20.491340    6750 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 19:51:20.491355    6750 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 19:51:20.491367    6750 cache.go:56] Caching tarball of preloaded images
	I0307 19:51:20.491422    6750 preload.go:173] Found /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 19:51:20.491428    6750 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 19:51:20.491501    6750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/newest-cni-723000/config.json ...
	I0307 19:51:20.491969    6750 start.go:360] acquireMachinesLock for newest-cni-723000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:20.491995    6750 start.go:364] duration metric: took 19.416µs to acquireMachinesLock for "newest-cni-723000"
	I0307 19:51:20.492002    6750 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:20.492008    6750 fix.go:54] fixHost starting: 
	I0307 19:51:20.492121    6750 fix.go:112] recreateIfNeeded on newest-cni-723000: state=Stopped err=<nil>
	W0307 19:51:20.492129    6750 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:20.495634    6750 out.go:177] * Restarting existing qemu2 VM for "newest-cni-723000" ...
	I0307 19:51:20.503434    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:82:d6:05:15:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:20.505350    6750 main.go:141] libmachine: STDOUT: 
	I0307 19:51:20.505367    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:20.505402    6750 fix.go:56] duration metric: took 13.39375ms for fixHost
	I0307 19:51:20.505408    6750 start.go:83] releasing machines lock for "newest-cni-723000", held for 13.410083ms
	W0307 19:51:20.505414    6750 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:20.505454    6750 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:20.505459    6750 start.go:728] Will try again in 5 seconds ...
	I0307 19:51:25.507384    6750 start.go:360] acquireMachinesLock for newest-cni-723000: {Name:mk5e2481692dff4aaf926de018f4e24a6ac73950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 19:51:25.507694    6750 start.go:364] duration metric: took 253.084µs to acquireMachinesLock for "newest-cni-723000"
	I0307 19:51:25.507815    6750 start.go:96] Skipping create...Using existing machine configuration
	I0307 19:51:25.507835    6750 fix.go:54] fixHost starting: 
	I0307 19:51:25.508540    6750 fix.go:112] recreateIfNeeded on newest-cni-723000: state=Stopped err=<nil>
	W0307 19:51:25.508566    6750 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 19:51:25.517966    6750 out.go:177] * Restarting existing qemu2 VM for "newest-cni-723000" ...
	I0307 19:51:25.522164    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:82:d6:05:15:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/newest-cni-723000/disk.qcow2
	I0307 19:51:25.532024    6750 main.go:141] libmachine: STDOUT: 
	I0307 19:51:25.532107    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 19:51:25.532185    6750 fix.go:56] duration metric: took 24.352ms for fixHost
	I0307 19:51:25.532211    6750 start.go:83] releasing machines lock for "newest-cni-723000", held for 24.49575ms
	W0307 19:51:25.532386    6750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-723000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-723000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 19:51:25.539973    6750 out.go:177] 
	W0307 19:51:25.544022    6750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 19:51:25.544047    6750 out.go:239] * 
	* 
	W0307 19:51:25.546597    6750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:51:25.555034    6750 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-723000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (70.396208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-723000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
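Every failure in this group reduces to the same root cause visible above: the qemu2 driver could not hand the VM's network fd over because nothing was listening on /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch of that reachability probe (the socket path is taken from the log above; everything else is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the control socket that socket_vmnet_client connects to on
		// behalf of qemu; "connection refused" reproduces the failure above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}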

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-723000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (31.873208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-723000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
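The (-want +got) listing above is the go-cmp diff convention: every expected image carries a - prefix because image list against the stopped host returned nothing. A minimal sketch of producing such a diff with github.com/google/go-cmp (slices abbreviated from the log):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // empty: the host never started, so no images were listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}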

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-723000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-723000 --alsologtostderr -v=1: exit status 83 (41.266417ms)

-- stdout --
	* The control-plane node newest-cni-723000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-723000"

-- /stdout --
** stderr ** 
	I0307 19:51:25.746012    6764 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:51:25.746148    6764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:25.746151    6764 out.go:304] Setting ErrFile to fd 2...
	I0307 19:51:25.746153    6764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:51:25.746269    6764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:51:25.746481    6764 out.go:298] Setting JSON to false
	I0307 19:51:25.746489    6764 mustload.go:65] Loading cluster: newest-cni-723000
	I0307 19:51:25.746683    6764 config.go:182] Loaded profile config "newest-cni-723000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 19:51:25.751021    6764 out.go:177] * The control-plane node newest-cni-723000 host is not running: state=Stopped
	I0307 19:51:25.754981    6764 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-723000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-723000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (32.074459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-723000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (31.809334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-723000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
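Each post-mortem above asks for host state with status --format={{.Host}}, a Go text/template rendered against minikube's status object, which is why the raw stdout is the single word "Stopped". A rough sketch of the mechanism (the Status struct here is an assumption for illustration, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status loosely mirrors the object minikube renders --format templates
	// against; the real field set is an assumption here.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the post-mortem output above.
		tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}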

Test pass (160/281)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 30.72
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 24.77
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.41
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 205.74
38 TestAddons/parallel/Registry 18.08
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 61.6
45 TestAddons/parallel/Headlamp 12.56
46 TestAddons/parallel/CloudSpanner 5.18
47 TestAddons/parallel/LocalPath 51.79
48 TestAddons/parallel/NvidiaDevicePlugin 5.16
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.07
53 TestAddons/StoppedEnableDisable 12.4
61 TestHyperKitDriverInstallOrUpdate 9.42
64 TestErrorSpam/setup 31.52
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.24
67 TestErrorSpam/pause 0.71
68 TestErrorSpam/unpause 0.65
69 TestErrorSpam/stop 64.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 75.88
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.82
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.05
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.14
81 TestFunctional/serial/CacheCmd/cache/add_local 1.2
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
86 TestFunctional/serial/CacheCmd/cache/delete 0.08
87 TestFunctional/serial/MinikubeKubectlCmd 0.53
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
89 TestFunctional/serial/ExtraConfig 35.82
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.66
92 TestFunctional/serial/LogsFileCmd 0.69
93 TestFunctional/serial/InvalidService 4.01
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 9.07
97 TestFunctional/parallel/DryRun 0.26
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 0.29
104 TestFunctional/parallel/AddonsCmd 0.12
105 TestFunctional/parallel/PersistentVolumeClaim 29.78
107 TestFunctional/parallel/SSHCmd 0.14
108 TestFunctional/parallel/CpCmd 0.47
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.46
115 TestFunctional/parallel/NodeLabels 0.05
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
119 TestFunctional/parallel/License 1.23
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
132 TestFunctional/parallel/ServiceCmd/List 0.29
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
135 TestFunctional/parallel/ServiceCmd/Format 0.11
136 TestFunctional/parallel/ServiceCmd/URL 0.11
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
138 TestFunctional/parallel/ProfileCmd/profile_list 0.15
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
140 TestFunctional/parallel/MountCmd/any-port 9.68
141 TestFunctional/parallel/MountCmd/specific-port 1.24
142 TestFunctional/parallel/MountCmd/VerifyCleanup 0.79
143 TestFunctional/parallel/Version/short 0.04
144 TestFunctional/parallel/Version/components 0.17
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
148 TestFunctional/parallel/ImageCommands/ImageListYaml 1.49
149 TestFunctional/parallel/ImageCommands/ImageBuild 6
150 TestFunctional/parallel/ImageCommands/Setup 5.35
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.15
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
153 TestFunctional/parallel/DockerEnv/bash 0.38
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.34
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
162 TestFunctional/delete_addon-resizer_images 0.11
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestMutliControlPlane/serial/StartCluster 247.96
169 TestMutliControlPlane/serial/DeployApp 8.78
170 TestMutliControlPlane/serial/PingHostFromPods 0.79
171 TestMutliControlPlane/serial/AddWorkerNode 52.3
172 TestMutliControlPlane/serial/NodeLabels 0.12
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 2.41
174 TestMutliControlPlane/serial/CopyFile 4.47
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 80.41
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 3.14
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.33
220 TestMainNoArgs 0.04
267 TestStoppedBinaryUpgrade/Setup 5.09
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
284 TestNoKubernetes/serial/ProfileList 31.47
285 TestNoKubernetes/serial/Stop 3.32
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
304 TestStartStop/group/old-k8s-version/serial/Stop 3.6
307 TestStartStop/group/no-preload/serial/Stop 3.78
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
326 TestStartStop/group/embed-certs/serial/Stop 3.2
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.91
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
346 TestStartStop/group/newest-cni/serial/Stop 3.51
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-410000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-410000: exit status 85 (98.497125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:55 PST |          |
	|         | -p download-only-410000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:55:38
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:55:38.899590    1622 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:55:38.899725    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:55:38.899728    1622 out.go:304] Setting ErrFile to fd 2...
	I0307 18:55:38.899731    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:55:38.899853    1622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	W0307 18:55:38.899933    1622 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18333-1199/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18333-1199/.minikube/config/config.json: no such file or directory
	I0307 18:55:38.901171    1622 out.go:298] Setting JSON to true
	I0307 18:55:38.918688    1622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1510,"bootTime":1709865028,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 18:55:38.918756    1622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 18:55:38.925119    1622 out.go:97] [download-only-410000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 18:55:38.928093    1622 out.go:169] MINIKUBE_LOCATION=18333
	I0307 18:55:38.925256    1622 notify.go:220] Checking for updates...
	W0307 18:55:38.925279    1622 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 18:55:38.936073    1622 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:55:38.937668    1622 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 18:55:38.941129    1622 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:55:38.944098    1622 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	W0307 18:55:38.950041    1622 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:55:38.950269    1622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:55:38.955061    1622 out.go:97] Using the qemu2 driver based on user configuration
	I0307 18:55:38.955081    1622 start.go:297] selected driver: qemu2
	I0307 18:55:38.955096    1622 start.go:901] validating driver "qemu2" against <nil>
	I0307 18:55:38.955167    1622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:55:38.959067    1622 out.go:169] Automatically selected the socket_vmnet network
	I0307 18:55:38.964613    1622 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 18:55:38.964742    1622 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:55:38.964842    1622 cni.go:84] Creating CNI manager for ""
	I0307 18:55:38.964859    1622 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 18:55:38.964907    1622 start.go:340] cluster config:
	{Name:download-only-410000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:55:38.970526    1622 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:55:38.975119    1622 out.go:97] Downloading VM boot image ...
	I0307 18:55:38.975172    1622 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 18:55:57.562837    1622 out.go:97] Starting "download-only-410000" primary control-plane node in "download-only-410000" cluster
	I0307 18:55:57.562884    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:55:57.835782    1622 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 18:55:57.835874    1622 cache.go:56] Caching tarball of preloaded images
	I0307 18:55:57.836577    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:55:57.842564    1622 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 18:55:57.842588    1622 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:55:58.431610    1622 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 18:56:19.086670    1622 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:19.086835    1622 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:19.788399    1622 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 18:56:19.788588    1622 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-410000/config.json ...
	I0307 18:56:19.788604    1622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-410000/config.json: {Name:mke5b03ac0a37a6ec34b8b6fd54e5c17259e0351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:56:19.788848    1622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 18:56:19.789024    1622 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 18:56:20.651764    1622 out.go:169] 
	W0307 18:56:20.656833    1622 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0 0x106b430a0] Decompressors:map[bz2:0x140006d6910 gz:0x140006d6918 tar:0x140006d68c0 tar.bz2:0x140006d68d0 tar.gz:0x140006d68e0 tar.xz:0x140006d68f0 tar.zst:0x140006d6900 tbz2:0x140006d68d0 tgz:0x140006d68e0 txz:0x140006d68f0 tzst:0x140006d6900 xz:0x140006d6920 zip:0x140006d6930 zst:0x140006d6928] Getters:map[file:0x1400211e570 http:0x140009862d0 https:0x14000986320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 18:56:20.656861    1622 out_reason.go:110] 
	W0307 18:56:20.664788    1622 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 18:56:20.668745    1622 out.go:169] 
	
	
	* The control-plane node download-only-410000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-410000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
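The preload downloads in this log carry ?checksum=md5:... parameters, and each fetch is followed by explicit "saving checksum" / "verifying checksum" steps. A minimal sketch of that verification pass (verifyMD5 is a hypothetical helper; the md5 value is copied from the v1.20.0 preload URL above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes a downloaded file and compares it against the checksum
	// the download URL advertised, mirroring the verify step in the log.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		fmt.Println(verifyMD5("preloaded-images.tar.lz4", "1a3e8f9b29e6affec63d76d0d3000942"))
	}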

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-410000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (30.72s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-277000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-277000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (30.716753125s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (30.72s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-277000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-277000: exit status 85 (84.196541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:55 PST |                     |
	|         | -p download-only-410000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| delete  | -p download-only-410000        | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| start   | -o=json --download-only        | download-only-277000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST |                     |
	|         | -p download-only-277000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:56:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:56:21.349960    1660 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:56:21.350084    1660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:56:21.350087    1660 out.go:304] Setting ErrFile to fd 2...
	I0307 18:56:21.350089    1660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:56:21.350209    1660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 18:56:21.351249    1660 out.go:298] Setting JSON to true
	I0307 18:56:21.367284    1660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1553,"bootTime":1709865028,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 18:56:21.367346    1660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 18:56:21.372258    1660 out.go:97] [download-only-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 18:56:21.376362    1660 out.go:169] MINIKUBE_LOCATION=18333
	I0307 18:56:21.372328    1660 notify.go:220] Checking for updates...
	I0307 18:56:21.383316    1660 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:56:21.386336    1660 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 18:56:21.389381    1660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:56:21.392339    1660 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	W0307 18:56:21.398385    1660 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:56:21.398586    1660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:56:21.400165    1660 out.go:97] Using the qemu2 driver based on user configuration
	I0307 18:56:21.400172    1660 start.go:297] selected driver: qemu2
	I0307 18:56:21.400175    1660 start.go:901] validating driver "qemu2" against <nil>
	I0307 18:56:21.400214    1660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:56:21.403314    1660 out.go:169] Automatically selected the socket_vmnet network
	I0307 18:56:21.408469    1660 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 18:56:21.408565    1660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:56:21.408602    1660 cni.go:84] Creating CNI manager for ""
	I0307 18:56:21.408612    1660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 18:56:21.408620    1660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 18:56:21.408657    1660 start.go:340] cluster config:
	{Name:download-only-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-277000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:56:21.412989    1660 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:56:21.416447    1660 out.go:97] Starting "download-only-277000" primary control-plane node in "download-only-277000" cluster
	I0307 18:56:21.416458    1660 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 18:56:22.092123    1660 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 18:56:22.092201    1660 cache.go:56] Caching tarball of preloaded images
	I0307 18:56:22.092891    1660 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 18:56:22.096937    1660 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 18:56:22.096966    1660 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:22.698839    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 18:56:40.067086    1660 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:40.067256    1660 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:40.650509    1660 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 18:56:40.650711    1660 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-277000/config.json ...
	I0307 18:56:40.650727    1660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-277000/config.json: {Name:mk2385546d52692d95153d5a538b3e57d8dab2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:56:40.650980    1660 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 18:56:40.651090    1660 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-277000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-277000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-277000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.0-rc.2/json-events (24.77s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (24.76521525s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (24.77s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-878000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-878000: exit status 85 (79.271542ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:55 PST |                     |
	|         | -p download-only-410000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| delete  | -p download-only-410000           | download-only-410000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| start   | -o=json --download-only           | download-only-277000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST |                     |
	|         | -p download-only-277000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| delete  | -p download-only-277000           | download-only-277000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST | 07 Mar 24 18:56 PST |
	| start   | -o=json --download-only           | download-only-878000 | jenkins | v1.32.0 | 07 Mar 24 18:56 PST |                     |
	|         | -p download-only-878000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:56:52
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:56:52.620034    1699 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:56:52.620166    1699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:56:52.620170    1699 out.go:304] Setting ErrFile to fd 2...
	I0307 18:56:52.620172    1699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:56:52.620306    1699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 18:56:52.621337    1699 out.go:298] Setting JSON to true
	I0307 18:56:52.637376    1699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1584,"bootTime":1709865028,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 18:56:52.637442    1699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 18:56:52.641944    1699 out.go:97] [download-only-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 18:56:52.645896    1699 out.go:169] MINIKUBE_LOCATION=18333
	I0307 18:56:52.642059    1699 notify.go:220] Checking for updates...
	I0307 18:56:52.653923    1699 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 18:56:52.655550    1699 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 18:56:52.658915    1699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:56:52.661954    1699 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	W0307 18:56:52.667871    1699 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:56:52.668023    1699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:56:52.670896    1699 out.go:97] Using the qemu2 driver based on user configuration
	I0307 18:56:52.670904    1699 start.go:297] selected driver: qemu2
	I0307 18:56:52.670908    1699 start.go:901] validating driver "qemu2" against <nil>
	I0307 18:56:52.670960    1699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:56:52.673924    1699 out.go:169] Automatically selected the socket_vmnet network
	I0307 18:56:52.679011    1699 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 18:56:52.679113    1699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:56:52.679146    1699 cni.go:84] Creating CNI manager for ""
	I0307 18:56:52.679155    1699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 18:56:52.679163    1699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 18:56:52.679202    1699 start.go:340] cluster config:
	{Name:download-only-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:56:52.683520    1699 iso.go:125] acquiring lock: {Name:mkcf9ccddf220024123985dc20153afc11a2860b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:56:52.686912    1699 out.go:97] Starting "download-only-878000" primary control-plane node in "download-only-878000" cluster
	I0307 18:56:52.686921    1699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 18:56:53.824640    1699 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 18:56:53.824710    1699 cache.go:56] Caching tarball of preloaded images
	I0307 18:56:53.825442    1699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 18:56:53.830969    1699 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 18:56:53.830999    1699 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:56:54.418538    1699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 18:57:10.987660    1699 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:57:10.987821    1699 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 18:57:11.545152    1699 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 18:57:11.545351    1699 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-878000/config.json ...
	I0307 18:57:11.545368    1699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/download-only-878000/config.json: {Name:mkcbbe9174780fb7212439ffbde9d429518a3d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:57:11.545607    1699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 18:57:11.545730    1699 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18333-1199/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-878000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-878000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
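
The preload flow logged above (preload.go:237-255) fetches the tarball with an md5 checksum query parameter, then verifies the saved file before caching it. A minimal Go sketch of that kind of MD5 verification; verifyMD5 is a hypothetical standalone helper for illustration, not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares the hex digest with the
// expected value (the download URL above carries
// checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 for this tarball).
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: verifymd5 <file> <md5-hex>")
		os.Exit(2)
	}
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}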

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-878000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.41s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-653000 --alsologtostderr --binary-mirror http://127.0.0.1:49331 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-653000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-653000
--- PASS: TestBinaryMirror (0.41s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-935000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-935000: exit status 85 (57.411458ms)

-- stdout --
	* Profile "addons-935000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-935000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-935000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-935000: exit status 85 (61.347625ms)

-- stdout --
	* Profile "addons-935000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-935000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (205.74s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-935000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-935000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m25.735294667s)
--- PASS: TestAddons/Setup (205.74s)

TestAddons/parallel/Registry (18.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 6.883458ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-c6628" [991b6c5f-be42-4c1c-8513-cced7da00d13] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004457875s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4g8rw" [a32e6832-a04d-4fcc-91b1-07b60c6e543c] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004276791s
addons_test.go:340: (dbg) Run:  kubectl --context addons-935000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-935000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-935000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.759019667s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 ip
2024/03/07 19:01:02 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.08s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g7hqq" [d446fba7-9b2f-48e7-85f4-eb32a50eeaa0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002546542s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-935000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-935000: (5.220292667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.089625ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-758pv" [2b344e4d-ec86-4cc1-8e72-496ea70077fb] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004351958s
addons_test.go:415: (dbg) Run:  kubectl --context addons-935000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (61.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.387459ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-935000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-935000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [70ef7010-8fe8-48d7-9ca7-ad981fc74aaf] Pending
helpers_test.go:344: "task-pv-pod" [70ef7010-8fe8-48d7-9ca7-ad981fc74aaf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [70ef7010-8fe8-48d7-9ca7-ad981fc74aaf] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003873958s
addons_test.go:584: (dbg) Run:  kubectl --context addons-935000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-935000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-935000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-935000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-935000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-935000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-935000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c8be9d21-35ba-4b5a-9019-0b5f00a14e39] Pending
helpers_test.go:344: "task-pv-pod-restore" [c8be9d21-35ba-4b5a-9019-0b5f00a14e39] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c8be9d21-35ba-4b5a-9019-0b5f00a14e39] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002066541s
addons_test.go:626: (dbg) Run:  kubectl --context addons-935000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-935000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-935000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-935000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.08621125s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.60s)

TestAddons/parallel/Headlamp (12.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-935000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-p7n25" [1ea50bab-24dd-4e60-9fce-76c401a0e5f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-p7n25" [1ea50bab-24dd-4e60-9fce-76c401a0e5f3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003919792s
--- PASS: TestAddons/parallel/Headlamp (12.56s)

TestAddons/parallel/CloudSpanner (5.18s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-sztcq" [ec434fcc-69b8-43ed-8be3-9029e162116e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003736417s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-935000
--- PASS: TestAddons/parallel/CloudSpanner (5.18s)

TestAddons/parallel/LocalPath (51.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-935000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-935000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c9324d48-8bde-4dbc-9ec3-7db562bcc2a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c9324d48-8bde-4dbc-9ec3-7db562bcc2a4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c9324d48-8bde-4dbc-9ec3-7db562bcc2a4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004062625s
addons_test.go:891: (dbg) Run:  kubectl --context addons-935000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 ssh "cat /opt/local-path-provisioner/pvc-3f1ef09e-da34-419e-8eee-222697714314_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-935000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-935000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-935000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-935000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.330081541s)
--- PASS: TestAddons/parallel/LocalPath (51.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kvqmd" [80c34cce-9d7a-45ca-a749-4d2c17971304] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004622s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-935000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-przgj" [c7143073-10df-4263-8203-c05582e3c537] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002649292s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-935000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-935000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-935000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-935000: (12.209603583s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-935000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-935000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-935000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (9.42s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.42s)

TestErrorSpam/setup (31.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-282000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-282000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 --driver=qemu2 : (31.520419042s)
--- PASS: TestErrorSpam/setup (31.52s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 pause
--- PASS: TestErrorSpam/pause (0.71s)

TestErrorSpam/unpause (0.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop: (12.205552583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop: (26.028223625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-282000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-282000 stop: (26.031836583s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18333-1199/.minikube/files/etc/test/nested/copy/1620/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0307 19:05:44.787098    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:44.794038    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:44.806086    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:44.826248    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:44.868341    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:44.948449    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:45.110530    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:45.432634    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:46.074803    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:47.356880    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:49.918901    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:05:55.040402    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-323000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m15.876310833s)
--- PASS: TestFunctional/serial/StartWithProxy (75.88s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.82s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --alsologtostderr -v=8
E0307 19:06:05.280908    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:06:25.762493    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-323000 --alsologtostderr -v=8: (37.82124725s)
functional_test.go:659: soft start took 37.821658833s for "functional-323000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.82s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-323000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:3.1: (3.541482667s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:3.3: (3.329872959s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 cache add registry.k8s.io/pause:latest: (2.264849125s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1738853332/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache add minikube-local-cache-test:functional-323000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache delete minikube-local-cache-test:functional-323000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-323000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (77.243083ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 cache reload: (1.953758875s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
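
The cache_reload sequence above removes registry.k8s.io/pause:latest inside the node, confirms that "crictl inspecti" now fails, runs "cache reload", and re-checks. A rough Go sketch of the same three-step check, shelling out to the binary and profile name from this run (the run helper is an assumption for illustration, not code from the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report against the
// functional-323000 profile and returns combined stdout/stderr.
func run(args ...string) ([]byte, error) {
	base := []string{"-p", "functional-323000"}
	return exec.Command("out/minikube-darwin-arm64", append(base, args...)...).CombinedOutput()
}

func main() {
	// After the image was removed, the inspect is expected to fail ...
	if _, err := run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	// ... reloading the cache should push it back into the node ...
	if out, err := run("cache", "reload"); err != nil {
		fmt.Printf("cache reload failed: %v\n%s", err, out)
		return
	}
	// ... and the same inspect should now succeed.
	if out, err := run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Printf("image still missing after reload: %v\n%s", err, out)
		return
	}
	fmt.Println("cache reload restored the image")
}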

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 kubectl -- --context functional-323000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-323000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (35.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 19:07:06.723551    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-323000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.82296925s)
functional_test.go:757: restart took 35.823088041s for "functional-323000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.82s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-323000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
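
ComponentHealth above asserts that each control-plane pod reports phase Running and a Ready condition. A small Go sketch of that check, decoding kubectl's JSON output directly; the podList type keeps only the fields needed and is an illustration, not the test's own implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just the fields the health check reads from
// "kubectl get po -o=json": pod name, phase, and the Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-323000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		// Matches the "phase: Running" / "status: Ready" pairs logged above.
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}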

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2195558844/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.69s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-323000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-323000: exit status 115 (110.941833ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32442 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-323000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
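
InvalidService above checks for a specific failure, exit status 115 (SVC_UNREACHABLE), rather than merely any non-zero exit. In Go that distinction is available through exec.ExitError, roughly as follows (a sketch against the binary and profile from this run, not the test's actual assertion code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The service command against a backend-less service should fail
	// with minikube's SVC_UNREACHABLE exit code (115 in the log above).
	cmd := exec.Command("out/minikube-darwin-arm64", "service", "invalid-svc", "-p", "functional-323000")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		fmt.Println("exit status:", ee.ExitCode()) // expect 115
	case err == nil:
		fmt.Println("unexpectedly succeeded")
	default:
		fmt.Println("failed to start command:", err)
	}
}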

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 config get cpus: exit status 14 (32.279083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 config get cpus: exit status 14 (32.856209ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DashboardCmd (9.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-323000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-323000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2545: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.07s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (149.735667ms)

-- stdout --
	* [functional-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0307 19:08:27.137164    2527 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:08:27.137630    2527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:27.137633    2527 out.go:304] Setting ErrFile to fd 2...
	I0307 19:08:27.137636    2527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:27.137780    2527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:08:27.141401    2527 out.go:298] Setting JSON to false
	I0307 19:08:27.158494    2527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2279,"bootTime":1709865028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:08:27.158547    2527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:08:27.165078    2527 out.go:177] * [functional-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 19:08:27.172131    2527 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:08:27.172199    2527 notify.go:220] Checking for updates...
	I0307 19:08:27.179132    2527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:08:27.186036    2527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:08:27.192923    2527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:08:27.197046    2527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:08:27.209043    2527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:08:27.213337    2527 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:08:27.213599    2527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:08:27.227084    2527 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 19:08:27.230997    2527 start.go:297] selected driver: qemu2
	I0307 19:08:27.231006    2527 start.go:901] validating driver "qemu2" against &{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:08:27.231080    2527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:08:27.237041    2527 out.go:177] 
	W0307 19:08:27.240077    2527 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 19:08:27.244009    2527 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
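The non-zero exit is this test's expected outcome: even with --dry-run, minikube validates the requested memory against a usable floor and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A rough Go sketch of that class of guard, with the 1800MB floor and exit code taken from the output above (the real check lives in minikube's start path and may differ):

package main

import (
	"fmt"
	"os"
)

// minUsableMB mirrors the floor quoted in the log above; the actual
// constant is internal to minikube and can change between releases.
const minUsableMB = 1800

// validateMemory rejects allocations below the usable minimum, the
// same class of check that produced exit status 23 above.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23)
	}
}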

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (148.730125ms)

                                                
                                                
-- stdout --
	* [functional-323000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 19:08:27.102421    2526 out.go:291] Setting OutFile to fd 1 ...
	I0307 19:08:27.102542    2526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:27.102546    2526 out.go:304] Setting ErrFile to fd 2...
	I0307 19:08:27.102549    2526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:27.102680    2526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
	I0307 19:08:27.104758    2526 out.go:298] Setting JSON to false
	I0307 19:08:27.124707    2526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2279,"bootTime":1709865028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 19:08:27.124838    2526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 19:08:27.129077    2526 out.go:177] * [functional-323000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0307 19:08:27.141050    2526 out.go:177]   - MINIKUBE_LOCATION=18333
	I0307 19:08:27.137184    2526 notify.go:220] Checking for updates...
	I0307 19:08:27.147058    2526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	I0307 19:08:27.151073    2526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 19:08:27.154042    2526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:08:27.158044    2526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	I0307 19:08:27.165073    2526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:08:27.172473    2526 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 19:08:27.172754    2526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 19:08:27.179119    2526 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0307 19:08:27.186017    2526 start.go:297] selected driver: qemu2
	I0307 19:08:27.186021    2526 start.go:901] validating driver "qemu2" against &{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 19:08:27.186064    2526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:08:27.192926    2526 out.go:177] 
	W0307 19:08:27.197056    2526 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 19:08:27.209066    2526 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.29s)
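The -f argument above is a Go text/template rendered against minikube's status struct (note that "kublet" in the format string is just literal label text, not a field reference). A self-contained sketch of how such a format string expands; the Status type here is illustrative, not minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in; the field names match the
// template references used in the test invocation above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same format string passed via -f in the log above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running",
		APIServer: "Running", Kubeconfig: "Configured",
	})
}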

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9debf6ba-7574-48c4-8249-f238fb1a8a0a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003844292s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-323000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-323000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-323000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b5757e9d-b44f-470c-b87d-a30d0f2a6ed1] Pending
helpers_test.go:344: "sp-pod" [b5757e9d-b44f-470c-b87d-a30d0f2a6ed1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b5757e9d-b44f-470c-b87d-a30d0f2a6ed1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004006459s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-323000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-323000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a981f317-f71f-4a73-8f4b-9df858ff983a] Pending
helpers_test.go:344: "sp-pod" [a981f317-f71f-4a73-8f4b-9df858ff983a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a981f317-f71f-4a73-8f4b-9df858ff983a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004012834s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-323000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.78s)
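The "waiting 3m0s for pods matching ..." lines are poll loops over pod phase; the test deletes the pod and re-applies it to prove the claim's data survives. A rough sketch of that wait pattern, assuming kubectl is on PATH and reusing the context and label from this run (this is not the harness's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunningPod polls kubectl until a pod with the given label
// reports phase Running, or the deadline passes.
func waitForRunningPod(ctx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, label)
}

func main() {
	if err := waitForRunningPod("functional-323000", "test=storage-provisioner", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}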

                                                
                                    
TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh -n functional-323000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cp functional-323000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3540685059/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh -n functional-323000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh -n functional-323000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.47s)

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1620/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /etc/test/nested/copy/1620/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1620.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/1620.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1620.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /usr/share/ca-certificates/1620.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/16202.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /usr/share/ca-certificates/16202.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.46s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-323000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo systemctl is-active crio"
E0307 19:08:28.643470    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh "sudo systemctl is-active crio": exit status 1 (68.257792ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
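This passes despite the non-zero exit: `systemctl is-active` returns exit status 3 (surfaced as exit 1 through minikube ssh) for any unit that is not active, so the check reads the "inactive" stdout text rather than the exit code. A small sketch of that interpretation, assuming it runs on a systemd host:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether a systemd unit is not active.
// `systemctl is-active` exits non-zero for every state but "active",
// so the exit error alone is not treated as failure; the stdout text
// ("inactive" in the log above) is what actually decides.
func runtimeDisabled(unit string) bool {
	out, _ := exec.Command("systemctl", "is-active", unit).Output()
	return strings.TrimSpace(string(out)) != "active"
}

func main() {
	fmt.Println("crio disabled:", runtimeDisabled("crio"))
}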

                                                
                                    
TestFunctional/parallel/License (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.2343745s)
--- PASS: TestFunctional/parallel/License (1.23s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2355: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [18dfe547-1bff-4c62-b684-87dc42a5ed16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [18dfe547-1bff-4c62-b684-87dc42a5ed16] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003918s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-323000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.151.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
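The dig invocation targets the cluster DNS service directly at 10.96.0.10, which the running tunnel makes routable from the host. The same query can be reproduced with a Go resolver pinned to that address; a sketch that assumes the tunnel is still up:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Pin the resolver to the cluster DNS service exposed by the
	// tunnel, matching the @10.96.0.10 target of the dig run above.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupIP(context.Background(), "ip4", "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("nginx-svc resolves to:", ips)
}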

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-323000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-323000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-323000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-bn22b" [2e6839ce-708d-480c-94b5-7e20aafbbe1d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-bn22b" [2e6839ce-708d-480c-94b5-7e20aafbbe1d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003803458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service list -o json
functional_test.go:1490: Took "284.738541ms" to run "out/minikube-darwin-arm64 -p functional-323000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30556
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30556
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "115.32075ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "37.798542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "115.419ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "37.802417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709867295082153000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709867295082153000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709867295082153000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001/test-1709867295082153000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p": (1.088451208s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  8 03:08 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  8 03:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  8 03:08 test-1709867295082153000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh cat /mount-9p/test-1709867295082153000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-323000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [29e7340c-6b6e-40bc-9ae0-e7c171936e97] Pending
helpers_test.go:344: "busybox-mount" [29e7340c-6b6e-40bc-9ae0-e7c171936e97] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [29e7340c-6b6e-40bc-9ae0-e7c171936e97] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [29e7340c-6b6e-40bc-9ae0-e7c171936e97] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004940417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-323000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3008229176/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port559124222/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.24575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port559124222/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh "sudo umount -f /mount-9p": exit status 1 (66.149917ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-323000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port559124222/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.24s)
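Note the first findmnt probe fails with exit 1 and the harness simply probes again once the 9p mount has appeared; the final umount failure ("not mounted") is likewise tolerated because the mount daemon already cleaned up. A generic probe-again helper of that shape, sketched in Go (illustrative, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCmd re-runs a command until it exits zero or attempts run out,
// mirroring the probe-again behaviour seen with findmnt above.
func retryCmd(attempts int, delay time.Duration, name string, args ...string) ([]byte, error) {
	var out []byte
	var err error
	for i := 0; i < attempts; i++ {
		out, err = exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return out, nil
		}
		time.Sleep(delay)
	}
	return out, fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	out, err := retryCmd(5, time.Second, "findmnt", "-T", "/mount-9p")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(string(out))
}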

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T" /mount1: exit status 1 (93.857375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-323000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-323000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2791866866/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.79s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-323000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-323000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-323000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-323000 image ls --format short --alsologtostderr:
I0307 19:08:46.970342    2691 out.go:291] Setting OutFile to fd 1 ...
I0307 19:08:46.970514    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:46.970518    2691 out.go:304] Setting ErrFile to fd 2...
I0307 19:08:46.970521    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:46.970649    2691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:08:46.971062    2691 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:46.971127    2691 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:46.972147    2691 ssh_runner.go:195] Run: systemctl --version
I0307 19:08:46.972157    2691 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
I0307 19:08:47.001251    2691 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-323000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/minikube-local-cache-test | functional-323000 | 115e10bcb7c30 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| gcr.io/google-containers/addon-resizer      | functional-323000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-323000 image ls --format table --alsologtostderr:
I0307 19:08:48.527496    2702 out.go:291] Setting OutFile to fd 1 ...
I0307 19:08:48.527657    2702 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:48.527660    2702 out.go:304] Setting ErrFile to fd 2...
I0307 19:08:48.527662    2702 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:48.527788    2702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:08:48.528210    2702 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:48.528269    2702 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:48.529209    2702 ssh_runner.go:195] Run: systemctl --version
I0307 19:08:48.529223    2702 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
I0307 19:08:48.558022    2702 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-323000 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c74841
9a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"115e10bcb7c3097ef4d8e6e1472d9157f9f2933f5676e0bd32d25c17134d12e2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-323000"],"size":"30"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54"
,"repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-323000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.9"],"size":"514000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-323000 image ls --format json --alsologtostderr:
I0307 19:08:48.450215    2700 out.go:291] Setting OutFile to fd 1 ...
I0307 19:08:48.450363    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:48.450367    2700 out.go:304] Setting ErrFile to fd 2...
I0307 19:08:48.450369    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:48.450500    2700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:08:48.450946    2700 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:48.451002    2700 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:48.452039    2700 ssh_runner.go:195] Run: systemctl --version
I0307 19:08:48.452049    2700 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
I0307 19:08:48.481398    2700 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
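
For readers consuming this output programmatically: each entry in the JSON listing above carries id, repoDigests, repoTags, and size (a byte count emitted as a string). A minimal Go sketch, assuming the out/minikube-darwin-arm64 binary and profile name from this run, that decodes the listing:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the `image ls --format json`
// stdout above; the type is illustrative, not minikube's internal struct.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // emitted as a string, e.g. "29000000"
}

func main() {
	// Re-run the same command the test exercises (binary path and
	// profile name taken from the log above).
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-323000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}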

TestFunctional/parallel/ImageCommands/ImageListYaml (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 image ls --format yaml --alsologtostderr: (1.486582167s)
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-323000 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 115e10bcb7c3097ef4d8e6e1472d9157f9f2933f5676e0bd32d25c17134d12e2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-323000
size: "30"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-323000
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-323000 image ls --format yaml --alsologtostderr:
I0307 19:08:46.970361    2692 out.go:291] Setting OutFile to fd 1 ...
I0307 19:08:46.970501    2692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:46.970504    2692 out.go:304] Setting ErrFile to fd 2...
I0307 19:08:46.970506    2692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:46.970630    2692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:08:46.971092    2692 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:46.971150    2692 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:46.971414    2692 retry.go:31] will retry after 1.392084868s: connect: dial unix /Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/monitor: connect: connection refused
I0307 19:08:48.366584    2692 ssh_runner.go:195] Run: systemctl --version
I0307 19:08:48.366606    2692 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
I0307 19:08:48.396174    2692 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.49s)

TestFunctional/parallel/ImageCommands/ImageBuild (6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-323000 ssh pgrep buildkitd: exit status 1 (64.5495ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr: (5.86275075s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in ab92dab2f975
Removing intermediate container ab92dab2f975
---> cf7da39018dd
Step 3/3 : ADD content.txt /
---> 55199dd437c1
Successfully built 55199dd437c1
Successfully tagged localhost/my-image:functional-323000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr:
I0307 19:08:47.110286    2697 out.go:291] Setting OutFile to fd 1 ...
I0307 19:08:47.110519    2697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:47.110522    2697 out.go:304] Setting ErrFile to fd 2...
I0307 19:08:47.110525    2697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 19:08:47.110653    2697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18333-1199/.minikube/bin
I0307 19:08:47.111098    2697 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:47.111824    2697 config.go:182] Loaded profile config "functional-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 19:08:47.112776    2697 ssh_runner.go:195] Run: systemctl --version
I0307 19:08:47.112788    2697 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18333-1199/.minikube/machines/functional-323000/id_rsa Username:docker}
I0307 19:08:47.141440    2697 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2662818906.tar
I0307 19:08:47.141491    2697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 19:08:47.145143    2697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2662818906.tar
I0307 19:08:47.146677    2697 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2662818906.tar: stat -c "%s %y" /var/lib/minikube/build/build.2662818906.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2662818906.tar': No such file or directory
I0307 19:08:47.146697    2697 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2662818906.tar --> /var/lib/minikube/build/build.2662818906.tar (3072 bytes)
I0307 19:08:47.155134    2697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2662818906
I0307 19:08:47.158559    2697 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2662818906 -xf /var/lib/minikube/build/build.2662818906.tar
I0307 19:08:47.161784    2697 docker.go:360] Building image: /var/lib/minikube/build/build.2662818906
I0307 19:08:47.161820    2697 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-323000 /var/lib/minikube/build/build.2662818906
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0307 19:08:52.928011    2697 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-323000 /var/lib/minikube/build/build.2662818906: (5.766337459s)
I0307 19:08:52.928091    2697 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2662818906
I0307 19:08:52.931912    2697 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2662818906.tar
I0307 19:08:52.935124    2697 build_images.go:207] Built localhost/my-image:functional-323000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2662818906.tar
I0307 19:08:52.935147    2697 build_images.go:123] succeeded building to: functional-323000
I0307 19:08:52.935151    2697 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.00s)
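
The stderr trace above shows the shape of `image build`: the local context is tarred, copied to /var/lib/minikube/build inside the VM, untarred, and built there with the legacy docker builder. A minimal sketch, assuming the same binary, profile, and testdata/build context as this run, that replays the two user-facing commands (the build, then the `image ls` verification from functional_test.go:447):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name are taken from the log above.
	const bin = "out/minikube-darwin-arm64"
	const profile = "functional-323000"

	// Build an image from a local context; minikube handles the
	// tar/scp/untar round trip traced by the ssh_runner lines above.
	build := exec.Command(bin, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}

	// Verify the new tag is now visible in the cluster's image list.
	out, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", strings.Contains(string(out), "localhost/my-image"))
}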

TestFunctional/parallel/ImageCommands/Setup (5.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.31075775s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.35s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
2024/03/07 19:08:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (2.070714792s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (1.46632875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-323000 docker-env) && out/minikube-darwin-arm64 status -p functional-323000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-323000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.392451167s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-323000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (1.827624208s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image save gcr.io/google-containers/addon-resizer:functional-323000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image rm gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-323000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-323000 image save --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-323000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-323000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMutliControlPlane/serial/StartCluster (247.96s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-501000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0307 19:10:44.746273    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:11:12.447581    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/addons-935000/client.crt: no such file or directory
E0307 19:12:37.779947    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:37.785723    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:37.795921    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:37.817668    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:37.859745    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:37.941066    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:38.103250    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:38.425410    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:39.067561    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:40.349636    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:42.911812    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:48.032902    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:12:58.274743    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-501000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (4m7.765023792s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (247.96s)

TestMutliControlPlane/serial/DeployApp (8.78s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-501000 -- rollout status deployment/busybox: (7.199111125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-6fhx7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-glcmr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-zxv47 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-6fhx7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-glcmr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-zxv47 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-6fhx7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-glcmr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-zxv47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (8.78s)

TestMutliControlPlane/serial/PingHostFromPods (0.79s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-6fhx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-6fhx7 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-glcmr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-glcmr -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-zxv47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-501000 -- exec busybox-5b5d89c9d6-zxv47 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (0.79s)

TestMutliControlPlane/serial/AddWorkerNode (52.3s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-501000 -v=7 --alsologtostderr
E0307 19:13:18.756183    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
E0307 19:13:59.716954    1620 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18333-1199/.minikube/profiles/functional-323000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-501000 -v=7 --alsologtostderr: (52.067938333s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (52.30s)

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-501000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (2.41s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.410524208s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (2.41s)

TestMutliControlPlane/serial/CopyFile (4.47s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp testdata/cp-test.txt ha-501000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile841411456/001/cp-test_ha-501000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000:/home/docker/cp-test.txt ha-501000-m02:/home/docker/cp-test_ha-501000_ha-501000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test_ha-501000_ha-501000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000:/home/docker/cp-test.txt ha-501000-m03:/home/docker/cp-test_ha-501000_ha-501000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test_ha-501000_ha-501000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000:/home/docker/cp-test.txt ha-501000-m04:/home/docker/cp-test_ha-501000_ha-501000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test_ha-501000_ha-501000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp testdata/cp-test.txt ha-501000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile841411456/001/cp-test_ha-501000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m02:/home/docker/cp-test.txt ha-501000:/home/docker/cp-test_ha-501000-m02_ha-501000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test_ha-501000-m02_ha-501000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m02:/home/docker/cp-test.txt ha-501000-m03:/home/docker/cp-test_ha-501000-m02_ha-501000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test_ha-501000-m02_ha-501000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m02:/home/docker/cp-test.txt ha-501000-m04:/home/docker/cp-test_ha-501000-m02_ha-501000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test_ha-501000-m02_ha-501000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp testdata/cp-test.txt ha-501000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile841411456/001/cp-test_ha-501000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m03:/home/docker/cp-test.txt ha-501000:/home/docker/cp-test_ha-501000-m03_ha-501000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test_ha-501000-m03_ha-501000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m03:/home/docker/cp-test.txt ha-501000-m02:/home/docker/cp-test_ha-501000-m03_ha-501000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test_ha-501000-m03_ha-501000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m03:/home/docker/cp-test.txt ha-501000-m04:/home/docker/cp-test_ha-501000-m03_ha-501000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test_ha-501000-m03_ha-501000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp testdata/cp-test.txt ha-501000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile841411456/001/cp-test_ha-501000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m04:/home/docker/cp-test.txt ha-501000:/home/docker/cp-test_ha-501000-m04_ha-501000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000 "sudo cat /home/docker/cp-test_ha-501000-m04_ha-501000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m04:/home/docker/cp-test.txt ha-501000-m02:/home/docker/cp-test_ha-501000-m04_ha-501000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m02 "sudo cat /home/docker/cp-test_ha-501000-m04_ha-501000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 cp ha-501000-m04:/home/docker/cp-test.txt ha-501000-m03:/home/docker/cp-test_ha-501000-m04_ha-501000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-501000 ssh -n ha-501000-m03 "sudo cat /home/docker/cp-test_ha-501000-m04_ha-501000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (4.47s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.41s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m20.407352167s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.41s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-582000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-582000 --output=json --user=testUser: (3.142390208s)
--- PASS: TestJSONOutput/stop/Command (3.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-633000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-633000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.793708ms)
-- stdout --
	{"specversion":"1.0","id":"0da11314-2fae-42dc-b5e5-844e2421842c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-633000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab88987b-a6d3-41a8-b826-d14f5a4ab0d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"c2ec1761-d667-44b9-815f-94f314cdc06f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig"}}
	{"specversion":"1.0","id":"bdacc2e0-413e-4fc3-90ce-0fd37bc976a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c302cd4d-c347-4c09-b14d-1e066e4fc40b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41b440d9-3805-4ab3-9e01-9edde0151c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube"}}
	{"specversion":"1.0","id":"fcdaafec-bdd7-4fd2-af99-aff49b411175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5349cfae-3524-4bcb-b854-0752a4cad53e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-633000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-633000
--- PASS: TestErrorJSONOutput (0.33s)
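
Each `--output=json` line in the stdout above is a CloudEvents-style envelope (specversion, id, source, type, data). A minimal Go sketch, assuming events are piped in on stdin, that picks out error events like the DRV_UNSUPPORTED_OS one shown:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope fields visible in the stdout above; the
// struct is illustrative, not minikube's own type.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // e.g. io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line, e.g.:
	//   out/minikube-darwin-arm64 start ... --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exitcode=%s name=%s: %s\n",
				e.Data["exitcode"], e.Data["name"], e.Data["message"])
		}
	}
}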

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (5.09s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.09s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.770875ms)

-- stdout --
	* [NoKubernetes-059000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18333
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18333-1199/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18333-1199/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
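Note: the MK_USAGE exit above is the behavior under test: --no-kubernetes and --kubernetes-version are mutually exclusive. Both remedies are taken from the log itself:

	# drop the version flag when no cluster is wanted...
	out/minikube-darwin-arm64 start -p NoKubernetes-059000 --no-kubernetes --driver=qemu2
	# ...or clear a globally configured version, as the error text suggests
	out/minikube-darwin-arm64 config unset kubernetes-version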

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-059000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-059000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.791375ms)

-- stdout --
	* The control-plane node NoKubernetes-059000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-059000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
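Note: this assertion leans on systemctl's exit-code contract: is-active exits 0 only when the unit is active, and --quiet suppresses the state string, so any non-zero exit counts as "kubelet not running". Here the non-zero status (83) actually comes from minikube itself because the host is stopped, which the test still accepts. A minimal sketch of the same probe:

	# illustrative: any non-zero exit means kubelet is not active
	out/minikube-darwin-arm64 ssh -p NoKubernetes-059000 "sudo systemctl is-active --quiet service kubelet" \
	  || echo kubelet-not-active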

TestNoKubernetes/serial/ProfileList (31.47s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.788411583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.683282542s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

TestNoKubernetes/serial/Stop (3.32s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-059000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-059000: (3.315776291s)
--- PASS: TestNoKubernetes/serial/Stop (3.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-059000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-059000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.967083ms)

-- stdout --
	* The control-plane node NoKubernetes-059000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-059000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-126000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (3.6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-168000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-168000 --alsologtostderr -v=3: (3.597553917s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.60s)

TestStartStop/group/no-preload/serial/Stop (3.78s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-200000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-200000 --alsologtostderr -v=3: (3.775272083s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.78s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-168000 -n old-k8s-version-168000: exit status 7 (57.838583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-168000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-200000 -n no-preload-200000: exit status 7 (58.33125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-200000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
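Note: the EnableAddonAfterStop steps follow a fixed pattern: query host status (exit 7 maps to a stopped host, which the helper tolerates), then enable an addon against the stopped profile; the passing step suggests the addon is recorded in the profile's config rather than applied live (an inference from this log, not verified here). The --images flag overrides an addon component's image as Component=image, and --registries does the same for registries, as seen verbatim in the newest-cni steps below:

	# illustrative: override the dashboard addon's MetricsScraper image on a stopped profile
	out/minikube-darwin-arm64 addons enable dashboard -p no-preload-200000 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4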

TestStartStop/group/embed-certs/serial/Stop (3.2s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-612000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-612000 --alsologtostderr -v=3: (3.204548209s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-612000 -n embed-certs-612000: exit status 7 (36.194208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-612000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-156000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-156000 --alsologtostderr -v=3: (1.91370825s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-156000 -n default-k8s-diff-port-156000: exit status 7 (58.034958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-156000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-723000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.51s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-723000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-723000 --alsologtostderr -v=3: (3.507110375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.51s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-723000 -n newest-cni-723000: exit status 7 (58.771542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-723000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/281)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-963000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-963000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-963000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/hosts:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/resolv.conf:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-963000

>>> host: crictl pods:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: crictl containers:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> k8s: describe netcat deployment:
error: context "cilium-963000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-963000" does not exist

>>> k8s: netcat logs:
error: context "cilium-963000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-963000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-963000" does not exist

>>> k8s: coredns logs:
error: context "cilium-963000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-963000" does not exist

>>> k8s: api server logs:
error: context "cilium-963000" does not exist

>>> host: /etc/cni:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: ip a s:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: ip r s:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: iptables-save:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: iptables table nat:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-963000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-963000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-963000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-963000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-963000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-963000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-963000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-963000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-963000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-963000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-963000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: kubelet daemon config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> k8s: kubelet logs:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-963000

>>> host: docker daemon status:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: docker daemon config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: docker system info:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: cri-docker daemon status:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: cri-docker daemon config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: cri-dockerd version:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: containerd daemon status:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: containerd daemon config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: containerd config dump:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: crio daemon status:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: crio daemon config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: /etc/crio:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"

>>> host: crio config:
* Profile "cilium-963000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-963000"
----------------------- debugLogs end: cilium-963000 [took: 2.2595425s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-963000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-963000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)
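Note: every probe in the debugLogs block above fails the same way because the cilium-963000 profile was never created; the test was skipped before any minikube start ran, so there is neither a profile nor a kubeconfig context to query. An illustrative way to confirm that (expected results are assumptions, not taken from this log):

	# illustrative: neither the profile nor the context should exist
	out/minikube-darwin-arm64 profile list             # no cilium-963000 row expected
	kubectl config get-contexts cilium-963000          # expected to fail: no such context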

TestStartStop/group/disable-driver-mounts (0.25s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-218000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)
